As work and workflows become distributed, the way organizations evaluate tier 2 storage is shifting. What we have all learned recently is the importance of making sure everything you create is online, searchable and accessible. I call this the “everything-online” approach. It is challenging the standard “speeds-and-feeds” and “cheap-and-deep” paradigms for storage, and it introduces new requirements for intelligent data management and access. Let’s take a deeper dive into what that means and how it impacts your ability to keep everything online.
Scalable Data Management Needs A Centralized Repository
When you are in a distributed environment, a centralized location for data is critical. Having disparate storage systems across your organization is not only difficult to manage from an operational perspective, but it also makes it hard to find the data you need when you need it. To solve this issue, organizations look to asset management solutions or traditional data management applications. These solutions keep a record of where data is located and let you tag assets so they are searchable. From a workflow perspective, this is a good start; however, you are still managing disparate storage systems. And what about future applications that need to access this data? Will they be able to interface directly with your existing data management applications?
As we move into a world where everything must be online, searchable and remotely accessible, uncertainty is the norm: you don’t know what data you will need, when you will need it, or even who or what will be making the request (a user, application or device). By leveraging a tier 2 storage solution with built-in data management capabilities, you cost-effectively supplement not only your storage capacity but also your existing investment in asset management and data management applications. This tier 2 storage can provide a holistic method for searching, categorizing and accessing all of your data. I wrote a blog that details this very topic: How Does Object Storage Facilitate Remote Administration and Workflows?.
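To make the tag-and-search idea concrete, here is a deliberately tiny Python sketch. The asset names, tags, and the in-memory dictionary are all hypothetical; a real deployment would rely on the storage system’s own metadata index rather than application code like this. The point is simply that once every asset carries tags in one central place, any request can be answered by matching tags.

```python
# Toy in-memory illustration of tagging assets so they are searchable.
# Asset names and tags below are invented for the example; a production
# system would query the storage platform's metadata index instead.
assets = {
    "promo_cut_v2.mov": {"tags": {"video", "marketing", "2020"}},
    "brand_logo.png":   {"tags": {"image", "marketing"}},
    "q3_forecast.xlsx": {"tags": {"finance", "2020"}},
}

def find_by_tags(*wanted: str) -> list[str]:
    """Return the names of assets carrying every requested tag."""
    return sorted(
        name for name, meta in assets.items()
        if set(wanted) <= meta["tags"]
    )
```

Because the tags live with the data in one repository, the same query works no matter which application, user or device issues it.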
The Two High-Level Components to Access
For any method of transportation, you need two things: a road or path of some form, and a method of transport (car, bike, legs, etc.). This concept can also be applied to transporting and accessing data. The road or path would be the bus, LAN or WAN, and the method of transport and access would be the protocol/interface. Both are important from a storage and data access perspective. Historically, protocols/interfaces were developed to access data locally, either within a system via the bus or over the LAN. In this scenario, internal networks and the speed of the storage system greatly impact the ability of specific applications to access data. In fact, the primary application often manages data access.
Enabling an Everything Online Approach
Now consider the requirements if you need to keep everything you create, all of your data and assets, online. This means your data must be accessible from any location. In this scenario, throughput, network speed and the ability to quickly find a file and deliver it over HTTP become the gating factors for efficient access to data. Check out our Tech Tuesday webinar on How Object Storage Makes it Easier to Access your Data from Anywhere for more information.
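What “deliver it over HTTP” means in practice is that any client that can issue an HTTP GET can retrieve an object directly, with no client library or mounted filesystem in between. Here is a minimal Python sketch using only the standard library; the endpoint, bucket and object names are placeholders, not a specific product’s API.

```python
from urllib.parse import quote
from urllib.request import Request, urlopen

# Hypothetical object-storage endpoint; substitute your own cluster's domain.
ENDPOINT = "https://objects.example.com"

def object_url(bucket: str, name: str) -> str:
    """Build the HTTP URL for a named object in a bucket,
    percent-encoding characters such as spaces."""
    return f"{ENDPOINT}/{quote(bucket)}/{quote(name, safe='/')}"

def fetch_object(bucket: str, name: str) -> bytes:
    """Retrieve an object's bytes with a plain HTTP GET --
    the same request a browser, script or device would make."""
    with urlopen(Request(object_url(bucket, name))) as resp:
        return resp.read()
```

Because the interface is just HTTP, the user, application or device making the request is interchangeable, which is exactly what an everything-online approach demands.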
The Tech Tuesday webinar focuses more on storage than the network (after all, Caringo is a storage company). However, an expert in architecting scalable web-based services is joining us for our September 24 Brews & Bytes webcast. Philip Mellor-Buckley, BT’s Infrastructure Designer for BT Services Platform, will join Tony Barbagallo, Caringo’s CEO, and me to discuss trends in connectivity and services for businesses and consumers, with a focus on the “everything-online” approach. Click here to register for the live event or to watch the on-demand recording.