PetaFlops for Analytics and Big Data
The rapid growth in the volume of data collected by organisations has driven the deployment of large multi-node systems to process and store it.
Costs for both public-cloud and private-cloud systems rise steeply as data volumes grow, and the cost of moving data between clouds has grown just as quickly.
Most of the largest organisations have already adopted a hybrid-cloud model, in which public cloud and private cloud (compute and storage located in the organisation’s own datacentre) coexist.
Many are now considering moving a large share of workloads to their own private cloud for reasons of cost, data security and performance.
GPU-accelerated compute shrinks the physical footprint of a cluster by consolidating many CPU-only servers into one or a few powerful systems.
A high-end GPU system can replace on the order of 100 CPU-powered compute nodes, which reduces space requirements and dramatically cuts power and other operating infrastructure costs.
The highest-performance systems are built from the ground up around multiple GPUs; lower-power GPUs can also be retrofitted into existing industry-standard servers for moderate performance.
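To make the consolidation claim concrete, the sketch below estimates annual electricity savings from replacing 100 CPU nodes with one multi-GPU system. All figures (per-node power draw, GPU-system power draw, electricity price) are illustrative assumptions for the sketch, not vendor specifications:

```python
# Illustrative consolidation estimate. Every figure below is an
# assumption chosen for the example, not a measured or vendor number.

CPU_NODES_REPLACED = 100    # consolidation ratio cited in the text
CPU_NODE_POWER_W = 400      # assumed draw per CPU compute node
GPU_SYSTEM_POWER_W = 6000   # assumed draw of one multi-GPU system
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15        # assumed electricity price, USD

def annual_power_cost(watts: float) -> float:
    """Annual electricity cost in USD for a constant power draw."""
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

cluster_cost = annual_power_cost(CPU_NODES_REPLACED * CPU_NODE_POWER_W)
gpu_cost = annual_power_cost(GPU_SYSTEM_POWER_W)

print(f"CPU cluster: ${cluster_cost:,.0f}/yr")
print(f"GPU system:  ${gpu_cost:,.0f}/yr")
print(f"Saving:      ${cluster_cost - gpu_cost:,.0f}/yr")
```

Even with conservative assumptions, the single consolidated system draws a fraction of the cluster's power, before counting cooling, rack space and maintenance.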
HPC Compute servers for Analytics
GPUs for retrofitting in existing servers
Storage servers for Analytics
High-density storage is also required for big data. The major vendors offer systems that provide many petabytes across multiple nodes with full redundancy.
For smaller requirements there are 30- and 60-drive systems capable of storing 0.5-1.0 PB (500-1000 TB).
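The 0.5-1.0 PB figure follows directly from the drive count; a quick sketch, assuming a hypothetical 18 TB per drive (the per-drive capacity is my assumption, not stated in the text):

```python
# Illustrative capacity arithmetic. The per-drive capacity is an
# assumption for the example; actual systems vary.

DRIVE_TB = 18  # assumed capacity per drive, in TB

def raw_capacity_pb(drives: int, drive_tb: float = DRIVE_TB) -> float:
    """Raw enclosure capacity in petabytes (1 PB = 1000 TB)."""
    return drives * drive_tb / 1000

for drives in (30, 60):
    print(f"{drives} drives x {DRIVE_TB} TB = "
          f"{raw_capacity_pb(drives):.2f} PB raw")
```

With 18 TB drives, a 30-drive enclosure lands near 0.5 PB and a 60-drive enclosure near 1 PB raw, consistent with the range above; usable capacity is lower once redundancy overhead is subtracted.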