NVIDIA releases the Ampere GPU-powered DGX A100 Supercomputing System!

about 1 year ago

NVIDIA CEO Jensen Huang announced the new Ampere GPU powered DGX A100 on 14 May 2020. This monstrous GPU system will certainly rock the HPC market with its whopping 5 PETAFLOPS of peak performance (FP16). It is the follow-up to the Volta-powered DGX-1 (8 GPUs) and DGX-2 (16 GPUs) products.

The DGX A100 contains eight next-generation A100 GPUs (GA100 chip) with 40 GB of memory each. The system brings many disruptive improvements:

  • A total of 320 GB of GPU memory and 5 PETAFLOPS of performance for the most demanding workloads!
  • Each individual GPU is 2x to 3x faster than a V100, depending on the specific workload, so the DGX A100 is faster than the DGX-2 across the board. In specific cases the peak performance of the DGX A100 can even be ten-fold, thanks to much more versatile mixed-precision hardware options!
  • HBM memory bandwidth of up to 1.5 TB per second
  • 6.5 kW peak power usage (better performance per watt than the DGX-2)
  • 6U rack height
  • A lot of internal I/O bandwidth improvements to feed the increased compute capacity of the Ampere GPUs.
  • TF32 Tensor Core capability for accelerated single-precision (FP32) workloads.
  • Tensor Core support for double-precision FP64.
  • INT8 capabilities: Ampere is a more general engine for all AI workloads, including inference.
  • Doubling of NVLink bandwidth compared with the older-generation DGX-1/2.
  • Better utilization of the GPUs: one GPU can be hardware-partitioned into up to 7 instances (real hardware I/O partitioning, no time sharing of the GPU!). So you could configure the DGX A100 as 56 lightning-fast GPU instances, each with roughly 5.7 GB of memory (see the MIG sketch after this list)!
  • Internal NVMe storage, upgradeable to 30 TB.
  • Introduction of bfloat16 (BF16) as a format for deep learning workloads (see the mixed-precision sketch after this list)!
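
To make the mixed-precision points above concrete, here is a minimal PyTorch-style training sketch (not taken from NVIDIA's announcement) showing how half-precision autocasting and TF32 matmuls are typically switched on. The toy linear model, tensor shapes and learning rate are illustrative assumptions only:

```python
import torch
import torch.nn.functional as F

# Illustrative toy model and data; real workloads would use their own network.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()  # loss scaling keeps FP16 gradients from underflowing

# On Ampere GPUs, ordinary FP32 matmuls can be routed through the TF32 Tensor Cores.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

x = torch.randn(512, 1024, device="cuda")
target = torch.randn(512, 1024, device="cuda")

for step in range(10):
    optimizer.zero_grad()
    # autocast runs eligible ops in half precision on the Tensor Cores
    with torch.cuda.amp.autocast():
        loss = F.mse_loss(model(x), target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```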
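
The 7-way partitioning mentioned above is NVIDIA's Multi-Instance GPU (MIG) feature. As a rough sketch (assuming admin rights and a recent nvidia-smi; profile ID 19 as the smallest 1g.5gb slice is an assumption), splitting one A100 into seven instances could look like this small Python wrapper around nvidia-smi:

```python
import subprocess

def run(cmd):
    """Run an nvidia-smi command and print its output (needs admin rights on the system)."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    print(result.stdout or result.stderr)

# Enable MIG mode on GPU 0 (may require a GPU reset before it takes effect).
run("nvidia-smi -i 0 -mig 1")

# List the GPU instance profiles the driver offers, e.g. the smallest 1g.5gb slice.
run("nvidia-smi mig -lgip")

# Create seven of the smallest instances plus their compute instances.
# Profile ID 19 is assumed to be the 1g.5gb profile on a 40 GB A100.
run("nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C")

# The partitions now show up as separate devices for schedulers and containers.
run("nvidia-smi -L")
```

Destroying the compute and GPU instances again (nvidia-smi mig -dci and -dgi) returns the card to a single 40 GB GPU.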

[Image: the DGX A100 system]


Conclusion: the DGX A100 is a game changer with essential new improvements, like GPU hardware sharing (MIG) and INT8 support. It is thus a more general powerhouse that can be used for a lot of different deep learning challenges, not only training but also large-scale inference (when configured in a network with many instances). Watch the presentation by NVIDIA's CEO on the DGX A100 or read about the specs in this pdf datasheet.

CGit is NVIDIA's only 100% Swedish Elite partner, certified to install the DGX A100 in your (cooled) datacenter. Place your order with us today, and the DGX A100 will be shipped to you next month.