Last week, I blogged about both traditional backup strategies and some of the newer paradigms for data protection. Here at Caringo, we provide our Swarm object-based storage (available as software only or as a complete software and hardware solution, our Single Server Appliance). Our products have an interesting relationship to the traditional notions of backing up data. On the surface, data protection looks very different in our world; but in reality, many of the same principles still apply.
The 3-2-1 Backup Rule
You are probably familiar with the 3-2-1 backup rule. It dictates that you keep three or more independent copies of your data, store those copies on two different types of media, and maintain at least one of the backup copies off-site.
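The rule is simple enough to express as a quick check. Here is a minimal sketch in Python; the `media` and `offsite` fields are illustrative names, not part of any real storage API:

```python
def satisfies_3_2_1(copies):
    """Return True if a set of copies meets the 3-2-1 rule:
    at least 3 copies, on at least 2 media types, with at
    least 1 copy stored off-site."""
    return (
        len(copies) >= 3
        and len({c["media"] for c in copies}) >= 2
        and any(c["offsite"] for c in copies)
    )

copies = [
    {"media": "disk", "offsite": False},   # primary copy
    {"media": "disk", "offsite": False},   # local replica
    {"media": "cloud", "offsite": True},   # off-site copy
]
print(satisfies_3_2_1(copies))  # True
```

Drop the cloud copy from the list and the check fails, since you would be left with only two copies and no off-site protection.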
The Principle of Maintaining Redundant Copies
Backing up data is fundamentally about having redundant copies of your files so that if the primary copy is destroyed, you have an option to fall back on. In our Swarm object storage technology, the software continuously monitors the data to ensure multiple copies are available at any given time.
Maintaining a Copy Off-Site
A best practice for backing up data includes keeping at least one copy off-site, and many object storage solutions (ours included) make it easy to do just that. In Part 1 of this blog, I noted that there are a lot of off-site storage options for backup data, from tape to an operational disaster recovery site that can feed data back at a moment’s notice. And of course, there are cloud solutions like Amazon (AWS), Google, Azure or a private cloud, such as those that can be built with Caringo Swarm.
Cost Considerations for Protecting Data
With low costs for “cold” storage and high levels of baked-in protection, the popularity of cloud solutions is really not at all surprising. But for companies with legacy systems, determining how to protect their data in the cloud may seem a daunting task. (Check out our Protecting Data with Caringo Swarm Object Storage whitepaper to learn how our built-in, continuous data protection works like “a ship carrying and protecting your data as the river of hardware changes over time.”)
The challenge of determining how to protect data in the cloud was one of our motivating factors at Caringo for the recent release of our FileFly 3.0 Data Management Tool, which now allows for automated backing up of data to a Swarm cluster or to AWS, Google or Azure clouds—according to your policies and your timing.
Policy-Driven Data Management with FileFly
With FileFly, data movement is governed by policies that describe the protection needs of the data as well as which back-end data store or stores best fit the business needs for that data. For example, frequently accessed data can be backed up every few minutes to both an object storage cluster and to the cloud, while infrequently accessed data is perhaps sent to cold storage once a week. FileFly allows for these sorts of policies to be defined in advance so administrators can move on to other tasks without having to worry about overseeing arduous backup tasks. It’s your data, protected your way.
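To make the idea of policy-driven movement concrete, here is a hypothetical sketch in Python. The policy fields, thresholds, and target names below are purely illustrative assumptions for this example; they are not FileFly’s actual configuration syntax:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    min_access_per_day: float   # threshold separating "hot" from "cold" data
    interval_minutes: int       # how often matching data is backed up
    targets: tuple              # back-end stores that receive copies

# Ordered hot-to-cold: frequently accessed data matches the aggressive policy.
POLICIES = [
    Policy("hot",  min_access_per_day=1.0, interval_minutes=15,
           targets=("swarm-cluster", "azure-blob")),
    Policy("cold", min_access_per_day=0.0, interval_minutes=60 * 24 * 7,
           targets=("cloud-cold-storage",)),
]

def select_policy(accesses_per_day):
    """Return the first policy whose access threshold the data meets."""
    for p in POLICIES:
        if accesses_per_day >= p.min_access_per_day:
            return p
    return POLICIES[-1]

print(select_policy(5.0).name)  # hot
print(select_policy(0.1).name)  # cold
```

The point of the sketch is that once the policies are defined up front, the decision of where and how often data is copied becomes automatic, which is exactly what frees administrators from babysitting backup jobs.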
To learn more, watch our Tech Tuesday: Using FileFly to Manage Your Data with Azure, Google, Amazon or Swarm webinar on demand or contact us to talk to a storage expert or to schedule a demo.