Austin, TX, November 8, 2018 — Caringo, Inc. announced today that Swarm Object Storage has been deployed by the JASMIN facility, a “super-data-cluster” that delivers infrastructure for data analysis funded by the Natural Environment Research Council (NERC), the UK Space Agency (UKSA), educational institutions and private industry. JASMIN is managed jointly by the UK’s Science and Technology Facilities Council (STFC) Scientific Computing Department (SCD) and the Centre for Environmental Data Analysis (CEDA), which is part of STFC’s RAL Space research and technology development facility.
Due to increasing capacity needs, a growing researcher base and a shift in access from traditional file protocols to RESTful interfaces, the infrastructure managers at STFC started looking for methods to streamline infrastructure management, tenant management, access controls, and private and public file sharing. Their research led them to object storage.
The selected object storage solution needed to deliver read performance comparable to the facility's existing parallel file systems. In benchmark testing, Caringo Swarm delivered an astounding 35 GB/s read and 12.5 GB/s write aggregate S3 throughput. Additionally, NFS throughput benchmarks via a single instance of SwarmNFS delivered 1.6 GB/s sustained streaming (3 PB+ per month) with no caching or spooling.
“The performance figures achieved are the result of Swarm’s unique parallel architecture that’s free of the bottlenecks inherent in other object storage solutions on the market,” said Tony Barbagallo, Caringo CEO. “These results are similar to what parallel file systems achieved in the same environment and we have a solid roadmap to continued performance improvements in future Swarm releases.”
The infrastructure for JASMIN is designed, deployed and managed by the SCD team at STFC Rutherford Appleton Laboratory. STFC is a world-leading, multi-disciplinary science organization with the goal of delivering economic, societal, scientific and international benefits to the UK and its people—and more broadly, to the world. The first phase of JASMIN was funded in 2011 and deployed in 2012. Since then, hundreds of petabytes of data have been processed and thousands of researchers have accessed the platform.
Mr. Barbagallo and the Caringo team will be at SC18, Booth 4035, in Dallas, Texas, November 12–15, 2018. Experienced engineers and storage architects will be on hand to answer attendees' questions and to demonstrate Caringo's latest product advancements for the HPC industry. SC18 attendees are invited to join the Caringo team and Boston Limited at their widely anticipated joint Happy Hour at 2 pm on Tuesday and Wednesday in the Caringo booth (#4035). To set up a custom demo or appointment, email Caringo at firstname.lastname@example.org.
Visit https://www.caringo.com/solutions/hpc/ for more information on Caringo's high-performance computing solutions. A use case covering the benefits of object storage and a whitepaper detailing performance are now available on Caringo.com.
Caringo was founded in 2005 to change the economics of storage by designing software from the ground up to solve the issues associated with relentless data growth. Caringo's flagship product, Swarm, decouples data from applications and hardware, providing a foundation for continued data access and analysis that continuously evolves while guaranteeing data integrity. Today, Caringo software-defined object storage solutions are used to preserve and provide access to rapidly scaling data sets across many industries by organizations such as NEP, the Science and Technology Facilities Council (STFC), Argonne National Labs, Texas Tech University, the Department of Defense, the Brazilian Federal Court System, British Telecom and hundreds more worldwide.