PetaFlops for Artificial Intelligence
Artificial Intelligence requires significant compute power.
In Machine Learning (ML) and Deep Learning (DL) environments, new GPU-accelerated technology is accelerating every part of the workflow, providing significantly shorter time to insight.
Current-generation GPUs feature CUDA processing cores and AI-specific Tensor cores to accelerate both training and inference.
Nvidia quotes an average 100X performance increase in training compared with CPU-only compute systems.
A significant reduction in training time massively increases data science productivity and shortens time to insight.
Nvidia also quotes an average 35X performance increase in inference compared with CPU-only systems.
Imagine inference taking only 3% of the time it takes now. The increased speed means more iterations are possible, improving accuracy.
We recommend Nvidia GPU-accelerated AI solutions.
Nvidia provides the GPUs and supports the software stack with ready-to-go AI containers to kickstart your project.
In most situations this means much shorter implementation times and far shorter times to first insights.
All of the major hardware vendors provide hardware designed for AI, supporting multiple high-performance compute accelerators or GPUs.
Petaflops provides the GPU-accelerated hardware for you to perform your analytics, DL and ML workloads.
We can pre-install Linux, Docker (or your preferred container runtime) and NGC containers prior to shipment.
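To illustrate how a pre-installed NGC stack is used, the commands below sketch pulling and running an NGC deep-learning container on a GPU host. The container tag shown is an example only; browse the NGC catalogue at ngc.nvidia.com for current images.

```shell
# Log in to the NGC container registry (an NGC API key is required).
docker login nvcr.io

# Pull an NGC deep-learning container (example tag; check NGC for current releases).
docker pull nvcr.io/nvidia/pytorch:24.01-py3

# Run the container with all GPUs visible, mounting a local data
# directory into the container's workspace.
docker run --gpus all -it --rm \
    -v /data:/workspace/data \
    nvcr.io/nvidia/pytorch:24.01-py3
```

From inside the container, frameworks such as PyTorch are already built with GPU support, so training can begin immediately.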
Ask us about vComputeServer: we can install it on a VMware ESXi virtualisation platform to dynamically allocate your GPU resources between virtual machines.
Call us on 1300 00 8100 to discuss your AI requirements. We have solutions starting at less than $75,000.
Data Scientist Workstations
Data Scientist workstations are under-the-desk, high-performance systems designed to reduce data cleansing, training and inference times. This increases the data scientist's productivity, often by a factor of 10X.
We have Data Science Workstations from HPE and Nvidia.
These are high-performance systems with Intel Xeon Gold or Platinum CPUs, 256GB or more RAM, and high-performance Nvidia GPUs.
Shared AI Servers
Shared AI servers traditionally run multiple Docker containers under Linux.
With Nvidia virtualised GPU technology and vComputeServer, it is possible to run multiple virtual machines, each with multiple containers, with flexible allocation of GPU and other resources between these virtual machines.
We pre-install the AI software stack using products from VMware (ESXi) and Nvidia (NGC).
We have shared AI high-performance compute servers from:
We also have AI-specific appliances (including both the software stack and hardware) from Gigabyte with leading-edge technology.
Gigabyte offers a number of DNN appliances specifically for AI workloads.
These appliances have options for Nvidia NVLink and a PCIe 4.0 bus, as well as support for AMD EPYC 64-core processors.