Earlier this week, I spoke at Storage Visions, an event focusing on storage in the Media & Entertainment (M&E) space. The M&E sector drives home the realization that expectations of storage have changed and that the rate of change is increasing. Today, more than ever before, you have a mix of storage for pre/post production, backup, archive, collaboration and content delivery. At Storage Visions, there was a lot of talk about the speed of content creation and how the entire production workflow is changing to support extremes ranging from rapid, full-season releases (leading to binge watching) to efficient pre/post production workflows for shows shot in remote areas that lack bandwidth or even power (like Vice and National Geographic). This range of use cases and consumption patterns, combined with the sheer volume of content being created, is increasing the velocity of change.
Discussion at the show covered a number of hot topics: how to move large data sets efficiently worldwide, cloud storage and object storage. I was pleasantly surprised to see that most people at the event knew of object storage and how it benefits them. However, the M&E industry as a whole is struggling with moving between the file-based and object-based worlds.
One of the panelists in the “Storage End Users on The Long Term” session summed up this issue well by saying, “The worst thing about a file system is that humans can access it. People put stuff in weird spots.” Tech jargon aside, this really captures one of the main issues with file systems: they impose a rigid structure that was meant for humans to read. You need to decide where to put stuff, and in today’s information age, applications, devices, and machines are increasingly making those decisions for us…and that’s a good thing.
Media, access methods, consumption patterns, applications, types of content, preservation methods and interoperability requirements will always be in flux; therefore, storage and IT infrastructure in general must continuously evolve. The only way to keep adapting is to make content (or data) the only constant while storage media, access methods and applications are continually added or replaced around it.
So how do you manage this increase in the velocity of change? You make sure you can move in any direction and add resources at the rate your use case requires while making sure that data integrity is preserved. This is where object storage comes in. By using a key-value method of storage, a RESTful interface and auto-healing, object storage was designed for content integrity and rapid access by applications and devices, without the limitations of human-readable structures such as the directories, folders and filenames of a file system. It was also designed for scale-out of capacity and throughput. This is one of the reasons TechTarget SearchStorage named object storage one of the hottest technologies for 2016.
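To make the key-value idea concrete, here is a minimal Python sketch of how applications interact with an object store: objects live in a flat namespace addressed by key, and each object carries a checksum so integrity can be verified on every read. The class name, keys and data here are all illustrative, not any particular vendor's API; real object stores expose these operations over a RESTful interface (e.g., HTTP PUT and GET).

```python
import hashlib

class ObjectStore:
    """Toy in-memory object store: a flat key-value namespace where
    each object is stored alongside a checksum for integrity checks."""

    def __init__(self):
        self._objects = {}  # key -> (data bytes, checksum)

    def put(self, key, data):
        # Compute and store a checksum with the object, similar in
        # spirit to the ETag returned by S3-style PUT requests.
        checksum = hashlib.md5(data).hexdigest()
        self._objects[key] = (data, checksum)
        return checksum

    def get(self, key):
        data, checksum = self._objects[key]
        # Verify integrity on read; a real system would use such a
        # mismatch to trigger auto-healing from a replica.
        if hashlib.md5(data).hexdigest() != checksum:
            raise IOError("corruption detected for key: " + key)
        return data

# The application, not a human, decides the key structure.
store = ObjectStore()
etag = store.put("show-101/scene-042/take-03.mxf", b"raw footage bytes")
assert store.get("show-101/scene-042/take-03.mxf") == b"raw footage bytes"
```

Note that the key looks like a path only by convention; to the store it is an opaque identifier, which is exactly why applications can organize and retrieve content without a human-imposed directory hierarchy.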
If you are struggling with managing the rate of change for content storage, access and preservation, you should look at object storage. If you would like to learn more, join me for our upcoming webinar: Object Storage: Using next-generation storage to solve today’s data problems. I’ll be discussing object storage, common architectures, where it is used, benefits, and use cases. We will also cover how object storage can help you keep up with the increasing velocity of change in data storage and access requirements. Spoiler alert: there’s a good reason object storage made the “hot” list for 2016!