I just read a blog post by one of our competitors on the definition of software-defined storage vs. software-based storage vs. arrays, and which technologies fit where. Unsurprisingly, the definition presented fits nicely into supporting the current cash cow for the major storage array vendors. Whether a storage technology is classified as software-defined, software-based or just storage software, we think it should solve problems, not just treat the symptoms.

The drivers – cloud, mobility, social, big data and the Internet of Things

The market trend that Gartner and IDC call the third platform is driving the need for a paradigm shift in storage. At the heart of this is that expectations of storage have changed significantly within the last few years. The third platform presents a new challenge: unpredictability is now the norm, not just in capacity growth but in the number of files, variability in file size and type, workload, use case, application, access method, number of users and even the value of information. To use an old cliché, “you don’t know what you don’t know.”

The cause – legacy storage (i.e. storage array, tape, file system, and tedious management) equals complexity and CAPEX

Without a doubt, the single biggest issue our customers communicate to us is the complexity associated with their current legacy storage. They tell us about the inability to scale beyond a few petabytes, botched upgrades and updates, tedious manual content migration, restrictive access and the lack of support for new cloud applications. Tech-refresh time and forklift upgrades are particularly dreaded.

A close runner-up to the complexity issue is the economic impact of a large, front-loaded CAPEX spend on technology that is supposed to last half a decade.

The reason – the third platform has changed the expectation of storage

The reason the old legacy storage and storage array approach is becoming a strategic disadvantage is that your employees and your customers have more choices, and you can’t control their workloads and use cases. They can easily plop down a credit card and open an AWS account, share their large files via Box, or even download OpenStack and hack together a quick-and-dirty cloud solution. These options are quick, they provide (near) instant gratification, and they can be billed as OPEX.
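To illustrate just how low that barrier has become, here is a minimal sketch (using the AWS boto3 Python SDK; the bucket name and file are hypothetical placeholders) of what “instant gratification” looks like to a developer today:

```python
# Minimal sketch: putting data into cloud object storage with the AWS SDK (boto3).
# The bucket name and file path are hypothetical placeholders; credentials are
# assumed to be configured in the environment.
import boto3

s3 = boto3.client("s3")
s3.create_bucket(Bucket="example-team-bucket")      # seconds to provision, billed as OPEX
s3.upload_file("big-dataset.csv", "example-team-bucket", "big-dataset.csv")
print("Stored - no procurement cycle, no array migration, no forklift upgrade.")
```

A few lines like these are all it takes for an employee to route around a slow, CAPEX-heavy storage process.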


Only treating the symptom = “software-defined” management layers on top of legacy storage

Now, I agree storage arrays will be with us for a while. A lot of money has been invested in them, and I’m not saying throw that away, but you do need to treat it as a sunk cost. You may never get the full value of the money you spent a few years ago, because your needs (or more specifically, the needs of your business, employees and customers) are moving too fast.

Legacy storage is the root cause of many current storage issues, and the reason you can’t handle unpredictability in growth, apps, workloads and use cases. The only way to solve this is to stop using legacy storage and move to a complete, truly “software-defined storage” approach: one that supports and virtualizes the resources of any standard hardware underneath it, so you can continually evolve that hardware over time and support any known or, more importantly, unknown workload or use case.

Where to go from here?

Making the decision to move to a new paradigm isn’t easy, but more organizations are making the switch. Your employees and customers are using the cloud, are used to instant gratification and are used to storing everything they create. Companies in every industry are using big data to out-maneuver their competitors, and unpredictability is becoming, well… more unpredictable. So… will holding on to legacy storage and legacy processes, and continuing to bleed huge chunks of capital every five years, help or hinder you?

We are working hard every day to make the transition to our storage software as seamless as possible. In my next post I will tell you how, and as always, feel free to contact us directly if you have any questions.