When I co-founded Caringo with Paul Carpentier and Mark Goros 13 years ago, the IT landscape was different. In storage, you had large monolithic devices purpose-built for very specific applications. Innovation was measured in cost and speed (IOPS), and the primary interface to these devices was based on standards (NFS and CIFS). Many systems were closed, RESTful interfaces did not exist (we were grateful when Amazon announced S3, so we had company), and IT infrastructure was sold as a package with hardware and software combined. All the while, digital content was growing.
Here is what the world looked like in 2005:
- Caringo was launched with the vision to change the economics of storage
- Raw storage was around $0.70/GB with hard-drive sizes in the 250-400 GB range
- Amazon AWS had not officially launched, and Microsoft Azure and Google Cloud were still years away (both arriving in 2008)
- Facebook was only 1 year old, with about 1 million users, and had just closed its initial round of financing
- Netflix had 4.2M subscribers to its DVD delivery service; streaming was still 2 years away
- Twitter didn't exist and YouTube had just launched
What we saw was enterprise storage requirements evolving from specific application needs to content-driven needs. Capacity growth, file growth, and over-the-web access requirements were straining file systems. We saw the need to disconnect storage software from hardware so that organizations could take advantage of advancements in standard x86 servers and protect data without RAID. We also saw the need for the native interface of content-focused storage to be based on HTTP, the language of the Web.
Fast forward 13 years to 2018:
- Caringo has enabled hundreds of organizations worldwide to store billions of objects
- A 3TB drive is $75, or $0.02/GB, and you can get a 12TB hard drive
- AWS generated $17.4B in revenue for Amazon and is on track for $20B in 2018
- Facebook has 1.4B daily active users and 2.13B monthly active users
- Netflix has 118M subscribers, is about to overtake Cable Subscribers and has an $8B content budget
- Twitter has 1.3B accounts, and 500M tweets are sent out daily; that's roughly 6K tweets every second
- Every day on YouTube 300 hours of video are uploaded and 5B videos are watched
It’s exciting to see the fire we started in 2005 taking hold; it is changing the whole storage industry for the better. Object storage has become a part of everyday vocabulary, HTTP as a storage protocol is now an accepted practice, large volumes of data can be stored cost effectively, and enterprises can leverage new generations of hardware without expensive migrations. We have achieved much of what we set out to do.
The next phase is for the industry to see Software-Defined Object Storage not only as cheap and deep, but also as a new way of managing data and solving problems. There is untapped potential in metadata-driven applications replacing databases and leveraging big data and search-engine technology to build applications more rapidly and cost effectively. Using Swarm pure Object Storage, data can now be self-describing, portable, and live forever, separate from the hardware on which it resides. This has wide-ranging ramifications for the deluge of data coming from IoT and the ability to leverage existing data sets across the enterprise. As companies wake up to the new possibilities and let go of the old model, in which metadata lives in databases and storage is a dumb file system, they will reap the benefits of object storage as a data management platform, creating competitive advantages for themselves while reducing costs.
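To make "self-describing data" concrete, here is a minimal sketch of the idea in Python: an object is written over plain HTTP with its descriptive metadata carried as custom headers, so the content and its context travel together. The endpoint URL and the `X-Meta-*` header names are illustrative assumptions, not Swarm's actual API; real object stores (Swarm, S3, and others) define their own metadata conventions.

```python
import json
import urllib.request

# Hypothetical storage endpoint, for illustration only.
OBJECT_URL = "http://storage.example.com/bucket/sensor-0042"

payload = json.dumps({"temperature_c": 21.5}).encode("utf-8")

# Attach descriptive metadata as HTTP headers so the object is
# self-describing: a metadata-aware index or search layer could
# query these fields without a separate database.
# Header names here are assumed, not a specific product's API.
request = urllib.request.Request(
    OBJECT_URL,
    data=payload,
    method="PUT",
    headers={
        "Content-Type": "application/json",
        "X-Meta-Device-Id": "sensor-0042",
        "X-Meta-Location": "plant-3/line-7",
    },
)

# The request now carries both the content and its metadata together.
print(request.get_method(), request.full_url)
```

Because the metadata rides with the object itself, it survives hardware migrations and can be harvested by search-engine technology directly, which is the shift from "dumb file system" to data management platform described above.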