Using Nexsan’s high-density storage and nearly 1.7 million files of raw data collected from its laser detectors, Caltech’s LIGO project was able to confirm Einstein’s theory of relativity.
When researchers at LIGO (the Laser Interferometer Gravitational-wave Observatory) realized they would need petabyte-scale storage systems for their complex laser detectors, they opted for Nexsan Inc.’s high-capacity and hybrid flash arrays. Using these storage devices, the team was able to store and work with 6.4 petabytes of data, contributing to the project’s success in confirming Einstein’s general theory of relativity earlier this year.
The LIGO observatories were built and are operated by Caltech and MIT with the intention of allowing for direct observation of gravitational waves with cosmic origins. Part of this ambition was to eventually prove the highly influential theory of relativity that Einstein published in 1916.
Einstein’s general theory of relativity explains that what people perceive as the force of gravity actually arises from the curvature of space and time. Einstein further proposed that massive objects like stars and planets can change this geometry, creating curves in space.
To find direct evidence of this phenomenon, the LIGO project uses twin laser detectors based in Livingston, Louisiana, and Hanford, Washington. Together, the two detectors allow large-scale physics experiments to be conducted. In February, LIGO researchers detected gravitational waves indicating that two black holes had merged into one massive black hole. The detection showed that such events produce gravitational waves, ripples in space-time, confirming Einstein’s theory.
Along with the twin detectors, the project required a central data archive for storing the enormous volumes of data collected throughout the study. The data comprised billions of files of raw instrument data as well as analytics processing information.
Before the LIGO project switched to Nexsan, it was operating on a 15-year-old storage infrastructure built around a dozen Sun Microsystems and StorageTek tape systems.
“We had been sitting with this (old storage) for a while,” explained senior staff scientist at Caltech Stuart Anderson. “It was old and it was running 200GB disk drives and it really needed to be updated. It was getting too old to support (our requirements). It had reached the end-of-life stage.”
Anderson and his research team tried out one Nexsan SATABeast, a block storage system built for performance and high-capacity, high-density storage. SATABeast arrays are often used by corporations and other organizations for data backup, archiving, and digital video surveillance.
Eventually the team ended up deploying six SATABoy systems and 20 Nexsan E60vt models from Nexsan’s E-series hybrid arrays. These “hybrids” use both hard disk drives and solid-state drives.
“We needed a new disk layer so we had to pick a new product,” Anderson continued. “We installed the SATABeast and integrated it into the infrastructure and we tried to break it, but it just worked. Then we got SATABoy systems and eventually moved to the B Series E48 systems.”
According to Anderson, the LIGO project will continue to use its new Nexsan system for projects to come, including research involving colliding black holes and neutron stars.
“This was the most optimal solution that we found in terms of cost per performance, reliability, and support,” Anderson explained. “The solution met all our criteria.”
In a world where everyone from government organizations to major corporations seems to be struggling with massive troves of data and inadequate technology to store and use it, it’s somewhat remarkable that a project dedicated to exploring space-time was able to set itself up with an adequate data storage system. Then again, these scientists figured out how to prove gravitational waves exist; setting up a massive, easy-to-use data storage infrastructure was probably a relative walk in the park.