As the internet of things proliferates, it becomes increasingly clear that nearly everything in the world can generate endless streams of data. As long as something involves measurable activity, it can feed you a constant data flow that, while potentially useful, is often far too large to store, much less analyze.
IT leaders have run up against this problem more than ever in the past few years; the amount of digital data worth capturing, managing and organizing increases exponentially every year, and no amount of storage or big data processing seems capable of handling all of it.
According to a study conducted by Primary Data, IT administrators are now stuck in a bind: more than half of them are juggling upwards of 10 different types of data storage to handle all of their information.
“Where you have multiple architectures and solutions that don’t talk to each other, you have deep complexity and inefficiencies,” explained Vish Mulchand, senior director of product management and marketing at HPE Storage. “You have, essentially, technology islands, and that’s probably the single largest factor driving complexity. Islands may be great for vacations, but not for data storage.”
So how do IT leaders increase their storage efficiency while reducing their storage complexity? Here’s a list of important factors to keep in mind:
Avoid latency. Every layer you add to your storage environment increases the potential for latency.
“The key to sorting through anything storage-related in the datacenter is to concentrate on latency, not on IOPs (I/O operations per second). IOPs can lie, but latency never does,” explains Edward Haletky, managing director of The Virtualization Practice. “If you can measure latency at all levels, you can pinpoint an issue where latency increases.”
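Haletky's advice, measuring latency at every level, can be sketched with a simple per-layer timer. This is an illustrative example, not a real monitoring tool; the layer names and the `timed`/`p99` helpers are hypothetical:

```python
import time
from contextlib import contextmanager
from collections import defaultdict

# Hypothetical per-layer latency recorder: wrap each storage layer's calls
# so that a latency spike can be pinpointed to the layer where it appears.
latencies = defaultdict(list)

@contextmanager
def timed(layer):
    start = time.perf_counter()
    try:
        yield
    finally:
        latencies[layer].append(time.perf_counter() - start)

def p99(layer):
    # 99th-percentile latency for one layer, in seconds
    samples = sorted(latencies[layer])
    return samples[int(len(samples) * 0.99)]

# Nest the timers the way a request traverses the stack: the outer layer's
# time includes the inner layer's, so the difference isolates each layer.
with timed("filesystem"):
    with timed("block-device"):
        time.sleep(0.001)  # stand-in for the actual I/O call
```

Because the timers nest, subtracting the inner layer's latency from the outer's shows where the time actually went, which is exactly the pinpointing Haletky describes.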
Save yourself from latency issues by performing data services such as encryption, compression and deduplication inline, before data hits the disk, rather than after it has been saved.
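The inline approach can be sketched in a few lines. This is a minimal illustration using Python's standard `hashlib` and `zlib`; the `write_block`/`read_block` functions and the dictionary standing in for the disk are assumptions, not any vendor's actual write path:

```python
import hashlib
import zlib

# Stand-in for the disk: content-hash -> compressed block. Because blocks
# are deduplicated and compressed *before* they land here, the write path
# never stores redundant or uncompressed data.
store = {}

def write_block(data: bytes) -> str:
    key = hashlib.sha256(data).hexdigest()  # dedupe on content hash
    if key not in store:                    # a duplicate block costs nothing
        store[key] = zlib.compress(data)
    return key

def read_block(key: str) -> bytes:
    return zlib.decompress(store[key])

k1 = write_block(b"hello world" * 100)
k2 = write_block(b"hello world" * 100)  # identical block: no new storage used
```

Doing this work on the way in means reads never pay a post-hoc reduction penalty, which is the latency saving the paragraph above describes.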
It’s also important to take location into consideration. “Considering location when creating storage helps streamline interconnection for your hybrid cloud strategies,” recommended Chris Sharp, CTO of Digital Realty. “By identifying co-location environments to house your storage in close proximity to public cloud compute nodes, [you] can make a connection via a cross-connect to the public cloud, delivering extremely limited lag time without the need for expensive network tethers. With this approach, the overall architecture requires a lot less components to be successful, significantly reducing complexity and reliance on the internet.”
Moving storage to the cloud has also been enormously helpful for businesses hoping to increase the organization and efficiency of their data storage.
“Eliminating the datacenter altogether is the ultimate means to reduce its complexity,” explained Jaspreet Singh, CEO of Druva. “Moving storage to the cloud eliminates the need for hardware and all the management and overhead required for its infrastructure.”
According to IT consultant and blogger Ian Apperley, most of his clients are bypassing complicated and expensive infrastructure by moving straight to SaaS solutions.
That’s not to say that the cloud doesn’t necessitate a certain amount of work itself: “Public cloud use introduces its own challenges, such as data migration, performance, security and latency,” posited ClearSky Data CEO and co-founder Ellen Rubin. Rubin suggests that administrators should take advantage of managed services to overcome these kinds of obstacles.
Find good software. Software-defined infrastructure, or SDI, “allows you to change your perspective from managing efficiencies in cost to flexibly delivering efficiencies in time-to-value for your business,” explains Tim Cuny, VP of solutions for CMI. “The benefits from a software-defined, services-based approach are significant, including reduced hardware costs as you now buy inexpensive, commodity storage.”