Artificial intelligence in the data centre took a leap forward last week when GPU vendor NVIDIA announced a powerful machine-learning server platform, the NVIDIA DGX A100.
The company’s founder and CEO Jensen Huang used his keynote slot at GTC 2020 to talk about the new product and the latest version of NVIDIA’s Ampere GPU architecture. He called the DGX A100 ‘the world’s most advanced system for all AI workloads’, and claimed that a rack of five DGX A100 systems can replace a whole AI training and inference-focused data centre, at a tenth of the cost and using a twentieth of the power.
The company’s new Ampere A100 GPU packs around 54 billion transistors and can be scaled up with NVIDIA’s NVLink and NVSwitch high-speed interconnect technologies.
With the announcement came a foretaste of the ways in which NVIDIA will benefit from its newly rubber-stamped acquisition of Mellanox. Inside the DGX A100 are eight single-port Mellanox ConnectX-6 cards for scale-out clustering and a dual-port ConnectX-6 VPI InfiniBand HDR/200 GigE adapter for data/storage networking.
Also announced last week was the new Mellanox ConnectX-6 Lx SmartNIC, a highly secure and efficient 25/50 gigabit per second (Gb/s) product, designed to meet surging growth in enterprise and cloud scale-out workloads.