Every autumn since my 13th birthday, I fall in love. That’s because I live in New England. For about a six-week span, there is no better place to live: cool nights and warm days combine with the beauty of the leaves changing color. It really is the best time to be here…but all that comes with a price. At the same time that I’m falling in love, my hatred of the falling leaves grows. Living in New England, it’s a continual battle to deal with the leaves blanketing the lawn. Sure, I can rake the massive number of leaves that shed from my trees into piles. After all, it’s cheap and good exercise, but I don’t have the time to spend hours upon hours fighting a never-ending battle. The only way to keep up is to invest in the right solution to get the job done efficiently. In my case, that’s a riding mower and a backpack blower to clear my lawn of dead leaves.
How to Handle Spikes in Data and Changes in the Wind
One consistent challenge customers share with me is that as soon as they feel confident they have the ever-increasing amount of data under control, a new spike forces them to change their approach. Even with proper capacity planning, it’s an uphill battle to keep pace with the demands these spikes create.
In the past, the simple answer was “add more drives.” Or add another shelf of drives. Or another chassis/JBOD. Or another storage system. Bandage after bandage, each requiring more and more management. This approach can be the cheapest route, but like raking, it’s neither the most efficient nor the best long-term approach.
What is the Right Solution for Handling the Data Deluge?
Object storage, and specifically Swarm, is the right technology to get you out of the leaf-raking business and into a modern approach to intelligently managing and storing the deluge of data. Today more than ever, IT professionals are being asked to do more with less. And no matter how great a new technology is, if it doesn’t save time along with cost, it is missing a key benefit that every IT administrator is looking for. A trend I see over and over again is that customers gain hours back in their week with our platform.
Swarm gives back time for a number of reasons, most importantly its flexible architecture, self-healing properties, ease of scaling, and intelligent, integrated data management. Let’s examine the impact of each of these areas.
Adaptive Architecture and Encapsulated Metadata
A mesh architecture with parallel processing eliminates any single point of failure. Every storage node can complete 100% of the read/write operations, so failed hardware does not eliminate the ability to access data. Also, the metadata is encapsulated with the data so there is no external database to manage. Lastly, there is no underlying Linux or Windows OS to maintain on the storage nodes. As you can imagine, having to administer storage rings with different profiles, databases and operating systems eats up significant time. With Swarm, those complex requirements are removed and time is returned to your day.
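To make the idea of encapsulated metadata concrete, here is a minimal Python sketch (purely illustrative, not Swarm’s actual implementation): each object carries its metadata with it, so a single read returns data and metadata together, with no external database to query, back up, or keep in sync.

```python
class StoredObject:
    """Payload and metadata travel together as one unit."""
    def __init__(self, name, payload, metadata):
        self.name = name
        self.payload = payload
        self.metadata = metadata  # encapsulated: lives with the data


class StorageNode:
    """A toy storage node; every node can serve any read or write."""
    def __init__(self):
        self.objects = {}

    def write(self, obj):
        self.objects[obj.name] = obj

    def read(self, name):
        # One lookup returns data AND metadata -- no side trip to a
        # separate metadata database that could fail independently.
        return self.objects.get(name)


node = StorageNode()
node.write(StoredObject("report.pdf", b"%PDF...",
                        {"owner": "ops", "retention": "7y"}))
obj = node.read("report.pdf")
print(obj.metadata["owner"])  # metadata comes back with the object itself
```

The design point is simply that losing or corrupting a separate metadata store is no longer a failure mode: if the object survives, its metadata survives with it.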
Self-Healing Storage with Continuous Built-In Data Protection
There is no way to stop hardware from failing over time, especially aging hard drives. And of course, they never fail at a convenient time. Swarm storage nodes work collaboratively to recover from failed hard drives by proactively rebuilding missing data segments. And since only the missing segments are rebuilt, recovery is measured in minutes, not the hours or days typical of traditional RAID-based solutions. Best of all, this is fully automated and requires no administrative intervention. Watch our webinar on Data Resilience & Recovery in Swarm Object Storage to hear more on this topic.
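Here is a toy Python simulation of that idea (the node names, replica count, and placement logic are illustrative assumptions, not Swarm’s actual algorithm): segments are replicated across nodes, and when a node fails, only the segments that lost a copy are re-replicated from the surviving copies.

```python
import random

REPLICAS = 2  # copies of each segment (illustrative setting)


def place_segments(num_segments, nodes):
    """Assign each segment to REPLICAS distinct nodes."""
    return {seg: set(random.sample(nodes, REPLICAS))
            for seg in range(num_segments)}


def heal(placement, failed_node, survivors):
    """Re-replicate only the segments that lost a copy on the failed node."""
    rebuilt = 0
    for holders in placement.values():
        if failed_node in holders:
            holders.discard(failed_node)
            # Copy from a surviving replica to a node not already holding it.
            candidates = [n for n in survivors if n not in holders]
            holders.add(random.choice(candidates))
            rebuilt += 1
    return rebuilt


random.seed(0)
nodes = ["node-a", "node-b", "node-c", "node-d"]
placement = place_segments(1000, nodes)
survivors = [n for n in nodes if n != "node-a"]
rebuilt = heal(placement, "node-a", survivors)
print(f"rebuilt {rebuilt} of 1000 segments")  # only the affected ~half, not every disk
```

The contrast with a RAID rebuild is that work is proportional to the data actually lost, and every surviving node contributes, which is why recovery can be so much faster.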
Additionally, with Swarm, when you replace a failed drive you can install a drive of any size and utilize its full capacity. And you can do this on your schedule; since the system has already healed itself, the replacement is not an emergency.
Effortless Scalability
No other system makes it easier to scale. Add any x86 server to the cluster and step away. Within 90 seconds, the server is recognized by the rest of the cluster and begins contributing its resources to the storage pool with no administrator intervention. The system then load-balances across the cluster to best utilize the hardware for peak data protection. And since it’s a parallel processing environment, every new node increases both performance and data durability. Our AI & Machine Learning: The Smarts of the Swarm whitepaper explains more about how this works.
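As a rough illustration of incremental rebalancing, the consistent-hash sketch below (a generic stand-in, not Swarm’s actual placement logic) shows that adding a fourth node reassigns only a fraction of segments, rather than reshuffling the entire pool.

```python
import hashlib
from bisect import bisect_right


def ring_positions(nodes, vnodes=100):
    """Place vnodes virtual points per node on a hash ring."""
    ring = []
    for node in nodes:
        for i in range(vnodes):
            h = int(hashlib.md5(f"{node}:{i}".encode()).hexdigest(), 16)
            ring.append((h, node))
    return sorted(ring)


def owner(ring, key):
    """The node owning a key is the first ring point at or after its hash."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    idx = bisect_right(ring, (h, "")) % len(ring)
    return ring[idx][1]


keys = [f"segment-{i}" for i in range(5000)]
before = ring_positions(["n1", "n2", "n3"])
after = ring_positions(["n1", "n2", "n3", "n4"])
moved = sum(1 for k in keys if owner(before, k) != owner(after, k))
print(f"{moved / len(keys):.0%} of segments moved")  # a fraction, not 100%
```

Because only the reassigned segments are copied, a new node can be absorbed in the background while the cluster keeps serving traffic.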
Storage Administration Simplified
Knowing the health and performance of your cluster at a glance is critical to saving time. Proactive alerting and remote monitoring let administrators focus on other parts of their day, confident that if something needs attention, they will be notified quickly. Another great benefit for our customers is that they don’t need professional services for expansions, upgrades, or hardware retirements, nor do they need to schedule a maintenance window during off hours. All administration happens during runtime with a few simple clicks of the mouse. Typically, a cluster deploys in a few hours and is managed in just minutes a day.
You can learn more about best practices for object storage installation and management by watching this webinar.
How Can You Get Started with Swarm?
To get started with Swarm, contact us for a demo. We will be happy to discuss your specific use case to help you determine if Caringo Swarm is the right tool to handle your data explosion.
As I wrap up this blog and look out the window, I see more leaves falling. Luckily I have the right tools to handle this challenge. I hope you are also prepared!