Supercomputing, or High Performance Computing (HPC), is now over 50 years old. HPC and supercomputers have come a long way since the 1960s: many players have come and gone in this space, and the advances have been staggering. From GFLOPS to TFLOPS to PFLOPS, the processing speed of supercomputers continues to scale.
While early machines were simple and fast, providing long-term availability of data (storing it, protecting it and ensuring immediate access) was problematic, and there was limited opportunity for reuse and further analysis. Fast forward to the 21st century, and this is still a challenge for those using traditional SAN, NAS and tape solutions. There will always be a need for fast front-end storage for scratch space and modeling. But HPC results and data sets also need to remain accessible for reuse and further analysis, and the underlying infrastructure needs to support various methods of access, both traditional and cloud-enabled, while providing authentication and metering for reporting and chargebacks.
Working with organizations such as Texas Tech University, Argonne National Laboratory, Johns Hopkins and many others, we have seen the distinct benefits that Caringo Swarm Object Storage brings to HPC. With each passing year, our engineering team at Caringo continues to add new features and functionality to our field-hardened product. Swarm has been vetted and tested for over a decade to the highest standards of data integrity and reliability. It is used as a securely accessible asset library and storage service for a wide range of use cases across government, telecommunications, education, corporate and media & entertainment organizations. (Check out this on-demand webinar to learn more about HPC use cases.)
Installed on any mix of standard storage hardware, Swarm provides a limitless, seamless pool of storage resources (essentially, an active archive) with asset protection, lifecycle management, search and security built in. It does this while delivering up to a 75% reduction in storage total cost of ownership (TCO) and numerous hardware, operational and workflow efficiencies, including:
- Industry-leading hardware and server utilization for your content. Use up to 95% of hard drive space for data.
- Built-in multi-tenancy with bandwidth and storage metering for chargebacks and reporting to enable secure data sharing and detailed usage views.
- Cross-platform collaboration and access enabled by Write/Read/Edit via HTTP, S3 or NFS interchangeably.
- Rapid asset retrieval and instant delivery via integrated search with the ability to add custom metadata.
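As a rough sketch of what interchangeable HTTP access with custom metadata can look like, the snippet below builds an HTTP PUT request that stores an object and attaches searchable metadata as request headers. The host name, bucket, object key and the `x-example-meta-` header prefix are illustrative assumptions for this sketch, not Swarm's documented API; consult the product documentation for the actual header conventions.

```python
# Illustrative sketch: writing an object over plain HTTP with custom
# metadata attached as request headers, so the same object can later be
# found via metadata search or read back through an S3-style interface.
# The endpoint, bucket, key and "x-example-meta-" prefix are assumptions
# for illustration only.
from urllib.request import Request


def build_put_request(host: str, bucket: str, key: str,
                      data: bytes, metadata: dict) -> Request:
    """Build an HTTP PUT that stores `data` under bucket/key and
    carries each metadata entry as a custom request header."""
    url = f"http://{host}/{bucket}/{key}"
    headers = {"Content-Type": "application/octet-stream"}
    # Custom metadata travels with the object as headers; the
    # "x-example-meta-" naming here is a placeholder convention.
    for name, value in metadata.items():
        headers[f"x-example-meta-{name}"] = value
    return Request(url, data=data, headers=headers, method="PUT")


req = build_put_request(
    "swarm.example.com", "simulations", "run-042/results.dat",
    b"...simulation output...",
    {"project": "climate", "grid-size": "1024"})
print(req.get_method())   # PUT
print(req.full_url)
```

Because the metadata rides along with the object itself rather than living in a separate database, it stays available to any access method (HTTP, S3 or NFS) and to the integrated search index.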
This September 26, Caringo Product Manager Glen Olsen will be featured in our webinar, HPC Comes of Age: Handling Large Data Sets with Object Storage. Register now to watch live or on demand and learn how object storage can be used to manage massive pools of data, and how Caringo’s multi-protocol ingest, elimination of storage silos and advanced search capabilities are helping organizations meet 21st-century data demands.
Visit us at SC17 in Denver (Booth 1001) this November. We will have live demos of our latest product advancements and object storage experts on hand to help you determine the best way to store, protect and manage your data. Need to set up an appointment or get a free expo pass? Email us at firstname.lastname@example.org.