Does anyone remember the children’s book The Hungry Thing? It’s a simple story about a Hungry Thing who comes to town, sits on his tail, and points to a sign around his neck that says, “Feed Me.” He asks the townspeople for “Shmancakes,” which any smart preschooler knows rhymes with pancakes, and so the story goes. Why do I bring this up? Well, because Swarm can feed data to other “hungry” clusters for collaboration and disaster recovery.
Feeds is the name of Swarm’s object routing mechanism, which simply uses your internet connection to distribute data from a source cluster to a destination cluster. There are two other uses for Feeds as well, but we’ll get to those later.
Swarm Storage protects against disk failures and other hardware failures that might take out a machine, but it can’t protect against a true disaster like a flood. Feeds enable that protection by making copies of your data elsewhere. What gets replicated is a high-fidelity copy of the complete object, metadata and all, so it’s accessible and usable in any cluster where the object resides. Feeds provide a backup and disaster recovery solution for environments with a network connection between the source and target clusters—the internet works quite well for Feeds. In these environments, Feeds operate continuously in the background to keep up with source cluster intake. When Swarm recognizes new or updated objects in a domain that has been configured for replication, it copies those objects to an internal queue for transport.
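The detect-and-queue flow described above can be sketched in a few lines of Python. To be clear, this is only an illustration of the idea (a domain filter in front of a transport queue), not Swarm’s actual implementation; every name here is hypothetical.

```python
from queue import Queue

# Hypothetical set of domains configured for replication.
REPLICATED_DOMAINS = {"surveillance", "medical-images"}

# Internal queue of objects waiting for transport to the target cluster.
transport_queue = Queue()

def on_object_written(domain, object_name, metadata, content):
    """Illustrative hook: called when a new or updated object appears."""
    if domain in REPLICATED_DOMAINS:
        # The complete object travels to the target cluster -- metadata and all.
        transport_queue.put({
            "domain": domain,
            "name": object_name,
            "metadata": metadata,
            "content": content,
        })

# An object written to a replicated domain is queued for transport...
on_object_written("surveillance", "cam7/0001.mp4", {"lot": "back"}, b"...")
# ...while one written to an unreplicated domain is left alone.
on_object_written("scratch", "tmp.bin", {}, b"...")

print(transport_queue.qsize())  # 1
```

The point of the sketch is the filtering: because replication is decided per domain, the feed only ever queues objects you have chosen to replicate, and everything else stays local.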
Replication can be as simple or as complex as you require. You can use Feeds to create an offsite DR cluster or even set up n-way replication for collaboration and data locality. Note that data is replicated on a domain-by-domain basis, so you can choose what data to replicate and where it goes. Check out the diagram for a couple of examples:
And you can even monitor your feeds from the Swarm UI:
I mentioned two other uses for Feeds earlier. Here they are:
First, Swarm also uses Feeds to speed up searching through objects’ metadata. Metadata Search provides real-time metadata indexing and ad-hoc search capabilities within Swarm, by name or by metadata. The integrated Elasticsearch service (view this on-demand webinar for more on Elasticsearch) collects the metadata for each object and updates the search database in your Swarm network. When you update an object’s metadata or create a new object, domain, or bucket, the service collects only the metadata, not the actual content. Once metadata is indexed, I can search through all of the metadata in the cluster, both system-defined and custom. If my cluster holds surveillance video, for instance, I can create a search that identifies all the surveillance videos from the back parking lot of corporate headquarters for the last 24 hours. Watch this short video to learn more about metadata and how it is used in Swarm object storage.
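Since the index lives in Elasticsearch, a search like the parking-lot example can be expressed as a standard Elasticsearch bool query combining a custom metadata term with a time range. The sketch below builds such a query in Python; the index field names (`x_camera_location_meta`, `last_modified`) are my own illustrative assumptions, not Swarm’s actual search schema.

```python
import json

# Hypothetical query: surveillance videos from the back parking lot,
# recorded in the last 24 hours. Field names are illustrative only.
query = {
    "query": {
        "bool": {
            "filter": [
                # Match a piece of custom metadata attached to each object.
                {"term": {"x_camera_location_meta": "back-parking-lot"}},
                # Elasticsearch date math: anything newer than 24 hours ago.
                {"range": {"last_modified": {"gte": "now-24h"}}},
            ]
        }
    }
}

print(json.dumps(query, indent=2))
```

Because only metadata is indexed, a query like this touches the search database alone; the video content itself is never scanned.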
Second, we took the replication capability of Feeds, leveraging our original methodology for sending data between clusters, and extended it to Azure Blob storage. With Feeds, you can now replicate objects on a domain-by-domain basis to native Azure blobs. Once the data is on Azure, you can leverage Azure’s compute, data protection, and long-term archive services. All data that remains on-premises is protected and managed by Caringo Swarm.
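One practical detail a cloud-bound feed has to handle is naming: Azure container names must be 3–63 characters of lowercase letters, digits, and hyphens. Here is a rough sketch of the kind of domain-to-container mapping involved; the function is my own illustration, not Caringo’s implementation.

```python
import re

def azure_container_name(domain: str) -> str:
    """Map a Swarm domain name to a valid Azure container name (illustrative).

    Azure requires 3-63 characters: lowercase letters, digits, and hyphens.
    """
    # Lowercase, then replace any disallowed character (e.g. dots) with '-'.
    name = re.sub(r"[^a-z0-9-]", "-", domain.lower())
    # Collapse runs of hyphens and trim them from the ends.
    name = re.sub(r"-+", "-", name).strip("-")
    # Enforce the 3-63 character length limits.
    return name[:63].ljust(3, "0")

print(azure_container_name("Surveillance.Example.com"))  # surveillance-example-com
```

With a mapping like this in place, each replicated domain lands in its own container as native blobs, which is what lets Azure’s own services operate on the data directly.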
In other words, Feeds satisfies any data-“hungry” process and is a robust, standard feature of Swarm, enabling replication for collaboration, disaster recovery and search indexing. And it’s just one of Swarm’s many standard features. To learn more about some of these unique features, I recommend reading the Emergent Behavior: The Smarts of Swarm whitepaper. If you have any questions, don’t hesitate to contact us.