In today’s world, data is rarely deleted, and enterprise data is growing exponentially. With sophisticated tracking, demographics, and analytics, any enterprise organization with a website—and that’s every enterprise—is gathering enormous amounts of information on its customers, site visitors, usage, buying patterns…and, well, you name it. Depending on which analysts you believe, there will be 7 to 40 zettabytes of data by 2020, with 80–90% of it unstructured, and much of that data is largely inactive.
Inactive unstructured data includes photos, videos, log files, zip files that are extracted but never deleted, and even the 17 previous versions of the presentation I’ve been working on for this week’s BrightTALK webcast: Solving Issues with Microsoft Windows File Server Bloat.
Data, in general, tends to remain on the device where it was first stored. And if that’s your primary storage that is meant for ACTIVE workloads, like transaction processing, then this inactive data will quickly fill up your file servers and create a number of issues for your organization, including:
- Performance degradation as those filers fill up
- Storage silos, with data landing on the wrong tier of storage
- A noticeable increase in file backup times
- An even more noticeable time to recover that data in case of a disaster or hardware failure
How can you solve these issues? You could add more primary storage, but that leaves your IT staff with the exercise of load balancing the file servers or implementing complex clustering software. Another option is to set up manual processes that move this inactive data off primary storage using utilities like rsync or robocopy, but that still leaves a bit of a nightmare should you ever need to recover that data.
However, at Caringo, we believe the best way is to automate this process completely, reducing both the hardware cost of adding primary storage and the associated IT costs. And Gartner agrees…
“Cost optimization has become a critical and continuous discipline for many CIOs. In the age of digital business, cost optimization demands a mix of IT and business improvements to lower operating costs, drive more value and prepare for the digital future.”
Our FileFly solution does just this. With FileFly, you can migrate data and files—based on any policy or policies you choose—to a secondary storage cluster on Swarm Servers or commodity hardware powered by Caringo Swarm software-defined object storage. For instance, you may choose a policy to migrate all files that have not been accessed within 30 days and a second policy to migrate all ZIP files every 24 hours. That could take upwards of 80% of the data off your primary storage server. Of course, if a user or application does someday need to access a migrated file, FileFly collects it from secondary storage and transparently returns it to its original place on the primary filer.
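FileFly’s own policy configuration isn’t shown in this post. Purely as an illustration of what the two example rules select, here is a sketch of the same criteria expressed with standard Unix tools; the share path is a hypothetical placeholder:

```shell
#!/bin/sh
# Illustration only: FileFly policies are configured in the product, not
# via shell commands. This just shows which files the two example rules
# would match on a Unix-style filesystem. SHARE is a placeholder path.
SHARE=/srv/primary/share

# Policy 1: files not accessed (atime) within the last 30 days.
find "$SHARE" -type f -atime +30

# Policy 2: all ZIP files (a job like this would run every 24 hours).
find "$SHARE" -type f -iname '*.zip'
```

The point of the product is that, unlike ad hoc scripts, the migration is policy-driven and the recall is transparent: an access to a migrated file brings it back to its original location on the primary filer automatically.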
What are the business benefits? Imagine saving 75% on your overall storage TCO. OK, you don’t have to imagine. This is the reality that most of our customers experience, along with restored primary storage performance and an end to storage silos and data loss.
If you are interested in learning more, register for my upcoming BrightTALK webcast on February 23: Solving Issues with Microsoft Windows File Server Bloat. I’ll explain how our customers are reaping the benefits of our new FileFly Secondary Storage Platform.