NVIDIA has introduced the next generation of its Mellanox 400G InfiniBand, which it says gives AI developers and researchers the fastest available networking performance and enables them to take on the world’s most challenging problems.
With computing requirements expanding in areas such as drug discovery, climate research and genomics, NVIDIA Mellanox 400G InfiniBand offers a leap in performance, claimed NVIDIA at the SC20 launch.
The seventh generation of Mellanox InfiniBand delivers ultra-low latency, doubles data throughput over the previous HDR generation with NDR 400Gb/s, and adds new NVIDIA In-Network Computing engines for additional acceleration.
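The "doubled throughput" figure follows directly from the InfiniBand data-rate roadmap: HDR, the previous generation, runs at 200Gb/s per port, and NDR doubles that to 400Gb/s. A quick sanity check (generation names and rates are from the InfiniBand roadmap, not stated in the announcement itself):

```python
# Back-of-the-envelope check of the NDR bandwidth figure quoted above.
HDR_GBPS = 200  # gigabits per second, previous InfiniBand generation
NDR_GBPS = 400  # gigabits per second, as announced

assert NDR_GBPS == 2 * HDR_GBPS  # "doubles data throughput"

ndr_gigabytes_per_s = NDR_GBPS / 8  # convert bits to bytes
print(ndr_gigabytes_per_s)          # 50.0 GB/s of raw link bandwidth
```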
Infrastructure manufacturers including Atos, Dell Technologies, Fujitsu, Inspur, Lenovo and Supermicro have already indicated that they plan to integrate NVIDIA Mellanox 400G InfiniBand into their enterprise solutions and HPC offerings.
“The most important work of our customers is based on AI and increasingly complex applications that demand faster, smarter, more scalable networks,” said Gilad Shainer, senior vice president of networking at NVIDIA. “The NVIDIA Mellanox 400G InfiniBand’s massive throughput and smart acceleration engines let HPC, AI and hyperscale cloud infrastructures achieve unmatched performance with less cost and complexity.”
NVIDIA has additionally announced the NVIDIA DGX Station A100, which it describes as the world’s only petascale workgroup server. The second generation of the company’s AI system, DGX Station A100 accelerates demanding machine learning and data science workloads for teams working in corporate offices, research facilities, labs or home offices.
Delivering 2.5 petaflops of AI performance, DGX Station A100 is the only workgroup server with four of the latest NVIDIA A100 Tensor Core GPUs fully interconnected with NVIDIA NVLink, providing up to 320GB of GPU memory to speed breakthroughs in enterprise data science and AI, said NVIDIA.
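The headline numbers are consistent with four 80GB A100s, each contributing roughly 625 TFLOPS of peak AI throughput (FP16 Tensor Core math with sparsity). Both per-GPU figures are assumptions drawn from the A100 datasheet, not from the announcement itself:

```python
# Sanity check on the aggregate figures quoted above, assuming four
# 80GB A100 GPUs at ~625 TFLOPS of peak AI performance each.
NUM_GPUS = 4
MEMORY_PER_GPU_GB = 80     # assumed 80GB A100 variant
AI_TFLOPS_PER_GPU = 625    # assumed FP16 Tensor Core peak with sparsity

total_memory_gb = NUM_GPUS * MEMORY_PER_GPU_GB    # 320 GB of GPU memory
total_ai_petaflops = NUM_GPUS * AI_TFLOPS_PER_GPU / 1000  # 2.5 petaflops

print(total_memory_gb, total_ai_petaflops)  # 320 2.5
```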
DGX Station A100 is also the only workgroup server that supports NVIDIA Multi-Instance GPU (MIG) technology. With MIG, a single DGX Station A100 provides up to 28 separate GPU instances to run parallel jobs and support multiple users without impacting system performance.
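The 28-instance figure follows from MIG's per-GPU limit: each A100 can be partitioned into at most seven isolated GPU instances, and the DGX Station A100 has four A100s. A minimal sketch of the arithmetic:

```python
# MIG capacity of a DGX Station A100: four A100 GPUs, each of which
# can be split into up to seven independent GPU instances.
GPUS_PER_STATION = 4
MAX_MIG_INSTANCES_PER_A100 = 7  # MIG hardware limit on A100

total_instances = GPUS_PER_STATION * MAX_MIG_INSTANCES_PER_A100
print(total_instances)  # 28 separate GPU instances
```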
“DGX Station A100 brings AI out of the data centre with a server-class system that can plug in anywhere,” said Charlie Boyle, vice president and general manager of DGX systems at NVIDIA. “Teams of data science and AI researchers can accelerate their work using the same software stack as NVIDIA DGX A100 systems, enabling them to easily scale from development to deployment.”