Why is everybody talking about active archives? Data protection begins with keeping a copy of the data that resides on a primary storage system safely stored offsite. Traditional backup and restore software can perform this task, but this class of technology has several problems. For example, during weekly and monthly full backups, the destination has to be local to keep up, and the backups then have to be replicated to a remote site for safety. But that’s not the worst part. How is an item of data retrieved from the backups? How are the contents of the backups searched? What if the amount of data is so large that it cannot practically be backed up or replicated at all?
There are many compelling use cases for Caringo object storage. Swarm provides data protection, management, organization and search at massive scale, so you no longer need to migrate data into disparate solutions for content delivery, ongoing analysis, and dynamic preservation. You can consolidate all your files on Swarm, find the data you are looking for quickly, and reduce total cost of ownership by continuously evolving hardware and optimizing the use of its resources.
One use case where Caringo shines is as an active archive, offloading mission-critical Windows servers in the healthcare industry, because it overcomes the pain and the problems inherent in both backup-and-restore and replication technologies. What’s really going on in an active archive? Here’s a quick thumbnail sketch. Applications that run on Windows use the NTFS file system. The NTFS file system lives on a volume, which could be a direct-attached drive in the local server or a LUN on an iSCSI or Fibre Channel SAN. Users and applications are accustomed to specific pathnames to access files on the NTFS file system to which they have been granted access. The goal is to offload primary storage without requiring any changes for applications or users.
Caringo Swarm uses the NTFS file system’s reparse point technology to copy a file onto Swarm and leave behind a shortcut. When a user or program accesses a file at a specific pathname, the reparse point is triggered and the file is retrieved from the Swarm cluster immediately. Policies may direct Swarm to continuously move files older than a certain age, of a specific type, or in a particular directory to the active archive. It is not unusual for up to 80% of the expensive direct-attached or SAN-based storage to be freed up, which more than justifies the cost of the Caringo storage system.
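To make the policy idea concrete, here is a minimal, hypothetical sketch of the kind of rule an active-archive policy engine evaluates continuously: select files under a root directory that are older than a given age (and optionally match certain extensions) as candidates for tiering. The function name, parameters, and thresholds are illustrative assumptions, not Swarm’s actual API.

```python
import time
from pathlib import Path

def select_archive_candidates(root, max_age_days, extensions=None):
    """Illustrative policy rule (not Swarm's API): return files under
    `root` whose last-modified time is older than `max_age_days`,
    optionally filtered by file extension."""
    cutoff = time.time() - max_age_days * 86400
    candidates = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue  # skip directories and other non-file entries
        if extensions and path.suffix.lower() not in extensions:
            continue  # extension filter, e.g. {".log", ".dcm"}
        if path.stat().st_mtime < cutoff:
            candidates.append(path)  # old enough to tier to the archive
    return candidates
```

In a real deployment, each candidate would then be copied to the archive and replaced on the NTFS volume by a reparse-point stub, so the pathname users and applications rely on never changes.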
The files on Swarm are stored at a quarter of the cost and are better protected, thanks to Elastic Content Protection technology, which combines M-of-N erasure coding and replication for protection that is effortless to operate and superior to aging, inadequate RAID technology. Because it works continuously to protect healthcare information, Swarm finds it easier to keep up with the workload. Searching for a piece of data is as easy as any local Windows file search—nothing special or new to learn. And lastly, I would be remiss if I did not mention that the time and resources needed to back up the Windows servers or keep replicas offsite are also reduced by up to 80%.
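The principle behind M-of-N erasure coding is that an object is split into N fragments such that any M of them suffice to rebuild it, so losing N−M drives or nodes costs nothing. Swarm’s Elastic Content Protection uses more general codes than this, but the simplest possible instance—K data shards plus one XOR parity shard, where any K of the K+1 shards can reconstruct the object—illustrates the idea. This is a conceptual sketch, not Swarm’s implementation:

```python
from functools import reduce

def xor_bytes(a, b):
    # Bytewise XOR of two equal-length shards.
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_shards):
    """K equal-length data shards + 1 XOR parity shard.
    Any K of the resulting K+1 shards can rebuild the object
    (the simplest M-of-N erasure code; real systems use
    Reed-Solomon-style codes that tolerate multiple losses)."""
    parity = reduce(xor_bytes, data_shards)
    return data_shards + [parity]

def reconstruct(shards, missing_index):
    """Rebuild the lost shard: the XOR of all K+1 shards is zero,
    so the missing one equals the XOR of the survivors."""
    survivors = [s for i, s in enumerate(shards)
                 if i != missing_index and s is not None]
    return reduce(xor_bytes, survivors)
```

The storage overhead here is one extra shard per K, versus a full extra copy per replica—which is why erasure coding protects large objects far more cheaply than replication or RAID rebuilds at scale.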
Caringo software-defined object storage is easy to deploy and maintain and can scale to meet any need with pay-as-you-grow economics on standard servers. To hear more about how Caringo Object Storage is being used in the real world, join our VP of Product Tony Barbagallo on May 26 for the Protecting Healthcare & Life Sciences Data webinar.