HPC Research Cluster Gets Red Hat OpenStack Private Cloud
Petabyte-scale eMedLab consortium opts for a private cloud on Red Hat Enterprise Linux OpenStack with hybrid Cinder and IBM Spectrum Scale storage, rejecting object storage and the public cloud
eMedLab, a partnership of seven research and academic institutions, has built a 5.5PB private cloud high-performance computing (HPC) cluster on Red Hat Enterprise Linux OpenStack, using Cinder block storage and IBM’s Spectrum Scale (formerly GPFS) parallel file system. The organisation rejected object storage – an emerging choice for very large-capacity research data use cases – and also rejected the public cloud because of concerns over control and security of data.
eMedLab built the HPC cluster – in conjunction with a Sheffield-based HPC specialist – to provide compute resources to researchers working on genetic susceptibility to, and appropriate treatments for, cancers, cardiovascular conditions and rare diseases. Researchers can request and configure compute resources of up to 6,000 cores, plus storage, for their projects via a web interface. The HPC cluster is hosted at a datacentre in Slough.
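The article describes a web interface for these requests; the same OpenStack APIs can also be driven programmatically. The following is a minimal sketch using the openstacksdk Python library – the cloud, image, flavor and network names are illustrative assumptions, not eMedLab’s actual tooling.

```python
# Minimal sketch: requesting project compute against an OpenStack private
# cloud via openstacksdk. All names below are illustrative assumptions.
import openstack

# Credentials come from a clouds.yaml entry or OS_* environment variables.
conn = openstack.connect(cloud="emedlab-private")  # hypothetical cloud entry

# Launch a KVM-backed virtual machine for a research project.
server = conn.create_server(
    name="genomics-worker-01",
    image="rhel-research",      # assumed image name
    flavor="m1.xlarge",         # assumed flavor (vCPUs/RAM per VM)
    network="project-net",      # assumed project network
    wait=True,
)
print(server.name, server.status)
```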
It comprises 252 Lenovo Flex System blades, each with 24 cores and 512GB of RAM. Each blade runs a KVM hypervisor on which virtual machines are created for research projects, while private cloud management functions come from Red Hat Enterprise Linux OpenStack. Storage – to a total capacity of 5.5PB – is a combination of OpenStack Cinder block storage and IBM Spectrum Scale. Physical storage capacity comes via Lenovo GSS24 and GSS26 SAS JBODs, with 1.2PB on a faster scratch tier and 4.3PB of bulk capacity on larger drives...
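If the faster scratch tier and the bulk tier were exposed to researchers as separate Cinder volume types, provisioning against them might look like the sketch below – the volume type and volume names are assumptions for illustration only, not details confirmed by the article.

```python
# Minimal sketch: carving project storage from two assumed Cinder volume
# types backed by the Spectrum Scale scratch and bulk pools.
import openstack

conn = openstack.connect(cloud="emedlab-private")  # hypothetical cloud entry

# Fast scratch space for an active analysis run (the faster 1.2PB tier).
scratch = conn.create_volume(size=200, name="run42-scratch",
                             volume_type="scratch-tier")   # assumed type

# Larger, slower capacity for reference datasets (the 4.3PB bulk tier).
bulk = conn.create_volume(size=2000, name="reference-data",
                          volume_type="bulk-tier")         # assumed type
```

Volumes created this way would then be attached to the project’s virtual machines through the usual OpenStack compute/block-storage attach flow.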