How to Evaluate Object Storage in Your Workflows

I was on an analyst briefing call a few weeks ago and something the storage analyst said really stood out. I am paraphrasing a bit, but the comment was “…the conversation is always NAS OR Object Storage and it really should be NAS AND Object.” The analyst who said it has a solid handle on the differences between the two types of storage. For those who aren’t students of storage nuances, though, choosing the right storage tier to use at the right time is a daunting task, especially when storage vendors claim they do it all. And, as anybody who has ever tried to drill a hole, crimp coax cable, or cut a piece of wood knows, any task is a lot easier with the right tool. So, how do you find the right tool for your needs? You first have to understand what you are trying to achieve and then architect and test a solution, or you reach out to a trusted advisor.

Understanding what you are trying to achieve can be the most difficult part. Not because you don’t know your workflow, but because of how vendors (us included) communicate features, functions and benefits. Often, features are put on a matrix with checkboxes, and if a feature is mentioned on a vendor’s site, the box is “checked”: S3 support (check), geographic replication (check), NFS support (check), versioning (check), and so on. Pretty straightforward, right? Well, not really. As we move to a software-defined data center and POSIX-plus-RESTful workflows, variability is everywhere. For instance, S3 is a de facto standard, not an actual standard verified by a governing organization, and every storage vendor and application provider supports it a little differently. This means that features, performance and general functionality vary across storage types and applications. From the vendor and application perspective, S3 may be supported, but it may not fit your specific workflow requirements. So what do you do?
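One practical way to get past the checkbox matrix is to probe an endpoint yourself. Below is a minimal sketch in Python using the boto3 SDK, assuming a hypothetical S3-compatible endpoint and credentials (everything here is invented for illustration and not specific to any vendor mentioned in this post). It checks whether a claimed feature such as versioning actually behaves as expected, rather than merely accepting the API call:

```python
import uuid

import boto3
from botocore.exceptions import ClientError

# Hypothetical endpoint and credentials -- substitute your vendor's S3 gateway.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.internal",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket = f"s3-compat-check-{uuid.uuid4().hex[:8]}"
s3.create_bucket(Bucket=bucket)

# "Versioning (check)" on a datasheet only tells you the call is accepted.
# Verify that enabling it takes effect and that old versions are retained.
try:
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )
    s3.put_object(Bucket=bucket, Key="probe.txt", Body=b"v1")
    s3.put_object(Bucket=bucket, Key="probe.txt", Body=b"v2")
    versions = s3.list_object_versions(Bucket=bucket).get("Versions", [])
    print(f"versioning: {len(versions)} version(s) retained (expect 2)")
except ClientError as err:
    print(f"versioning: not usable here ({err.response['Error']['Code']})")
```

A handful of probes like this, run against each candidate system, often surfaces the differences that a feature matrix hides.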

Some organizations have the necessary skill sets in house to perform the proper analysis and to architect and implement the right solution. At Caringo, we will help wherever we can if this is the path you want to take. In fact, we just announced an appliance, Swarm Single Server, to make implementing Object Storage easier. The Swarm Single Server takes hardware and reference-architecture questions off the table. However, application interoperability and complete workflow integration for your environment still need to be certified. If you don’t have all the necessary skill sets and components in house, this is where a trusted advisor comes in.
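To make “certified” a little more concrete: a first interoperability pass usually means round-tripping a representative asset through the same API path your applications will use. The sketch below, again assuming a hypothetical endpoint and using Python/boto3, deliberately forces a multipart upload (a frequent S3 compatibility gap) and verifies data integrity on the way back:

```python
import hashlib
import io

import boto3
from boto3.s3.transfer import TransferConfig

# Hypothetical endpoint; point this at the appliance or cluster under test.
s3 = boto3.client("s3", endpoint_url="https://swarm.example.internal")

bucket, key = "certification-smoke-test", "sample/clip.mov"
payload = b"x" * (16 * 1024 * 1024)  # stand-in for a real asset from your workflow

s3.create_bucket(Bucket=bucket)

# Force the multipart code path by lowering the threshold, then read the
# object back and compare checksums end to end.
config = TransferConfig(
    multipart_threshold=5 * 1024 * 1024,
    multipart_chunksize=5 * 1024 * 1024,
)
s3.upload_fileobj(io.BytesIO(payload), bucket, key, Config=config)
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

assert hashlib.sha256(body).digest() == hashlib.sha256(payload).digest()
print("multipart round-trip integrity: OK")
```

A real certification effort goes much further, with your actual applications and data, but checks like this catch the obvious gaps early.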

One of the best examples of this I have seen is what our partner, Melrose Tec, did at their Open House last week. They combined three different storage solutions, hooked in the necessary content ingestion and editing workstations, networked it all via a Mellanox 100 Gbit switch, and had all of the servers and software user interfaces on display in their lab. In doing so, they saved their clients a tremendous amount of time and validated that all of the technologies played nicely together and that the necessary performance was achieved. The stack paired NVMe storage (Excelero) and a GPFS file-system layer (Pixit Pixstor) with active archive storage (Caringo Swarm), and they demonstrated a complete workflow: one of their customer’s colorists came in and worked on 8K footage in Resolve in real time, and the footage was then tiered to archival storage and instantly called back (a toy version of that tiering step is sketched below).
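In the demo, tiering was handled by the file-system layer, but the underlying motion of data is easy to picture. Here is a minimal sketch, assuming a hypothetical Swarm S3 endpoint, bucket name and NVMe mount path (all invented for illustration), of pushing a finished clip to the archive tier and recalling it to the fast tier:

```python
from pathlib import Path

import boto3

# Hypothetical names: "swarm-archive" stands in for the active-archive bucket
# behind an S3 gateway; the paths stand in for the NVMe project volume.
s3 = boto3.client("s3", endpoint_url="https://swarm.example.internal")

project_dir = Path("/mnt/nvme/projects/feature-x")
archive_bucket = "swarm-archive"

def tier_to_archive(clip: Path) -> None:
    """Push a finished clip off the fast tier into the active archive."""
    s3.upload_file(str(clip), archive_bucket, clip.name)
    clip.unlink()  # free the expensive NVMe capacity once archived

def recall_from_archive(name: str) -> Path:
    """Pull an archived clip back onto the fast tier for editing."""
    target = project_dir / name
    s3.download_file(archive_bucket, name, str(target))
    return target

tier_to_archive(project_dir / "reel1_8k.mov")
editable = recall_from_archive("reel1_8k.mov")
print(f"recalled to {editable}")
```

In a production workflow, the tiering layer drives these moves by policy rather than by hand, which is exactly what made the Melrose Tec demo compelling.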

As always, we at Caringo are here to help determine if Swarm fulfills your workflow requirements, but as Melrose Tec’s demo shows, we are sometimes just a piece of a broader solution. We have a number of partners that can help and we recommend browsing our resource section, especially our webinars, for up-to-date information on current trends and workflows. And, of course, you can always contact us and speak with one of our storage experts.