Enhancing Data Center Networks with InfiniBand Solutions

With the rapid growth of data centres driven by large AI models, cloud computing, and big data analytics, the demand for high-speed data transfer and low-latency communication keeps rising. In this complex network ecosystem, InfiniBand (IB) technology has become a market leader, playing a vital role in addressing the challenges posed by training and deploying large models. Constructing high-speed networks within data centres requires essential components such as high-speed network cards, optical modules, switches, and advanced network interconnect technologies.

NVIDIA Quantum™-2 InfiniBand Switch

When selecting switches, NVIDIA’s QM9700 and QM9790 series stand out as the most advanced devices available. Built on the NVIDIA Quantum-2 architecture, they offer 64 NDR 400Gb/s InfiniBand ports in a standard 1U chassis. This translates to a single switch delivering 51.2 terabits per second (Tb/s) of aggregate bidirectional bandwidth and a packet-handling capacity exceeding 66.5 billion packets per second (BPPS).
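For readers who want to sanity-check those headline figures, the short Python sketch below simply reproduces the arithmetic: 64 ports at 400Gb/s per direction gives 25.6Tb/s each way, or 51.2Tb/s bidirectionally.

# Aggregate bandwidth of a 64-port NDR switch such as the QM9700/QM9790.
ports = 64
port_rate_gbps = 400                                  # NDR: 400 Gb/s per port, per direction

unidirectional_tbps = ports * port_rate_gbps / 1000   # 25.6 Tb/s
bidirectional_tbps = 2 * unidirectional_tbps          # 51.2 Tb/s

print(f"Unidirectional: {unidirectional_tbps} Tb/s")
print(f"Bidirectional:  {bidirectional_tbps} Tb/s")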

Beyond NDR high-speed data transfer, the NVIDIA Quantum-2 InfiniBand switches combine high throughput, on-chip in-network computing, advanced intelligent acceleration features, flexibility, and robust construction. These attributes make them an ideal choice for high-performance computing (HPC) and large-scale cloud infrastructure.

Additionally, deploying NDR switches helps reduce overall cost and complexity, promoting the continued development of data centre network technology.

Also check: Revolutionizing Data Center Networks: 800G Optical Modules and NDR Switches | FS Community

ConnectX®-7 InfiniBand Card

The ASIC at the heart of the NVIDIA ConnectX®-7 InfiniBand network card (HCA) delivers 400Gb/s of data throughput and supports a 16-lane PCIe 5.0 or PCIe 4.0 host interface. Using SerDes technology running at 100Gb/s per lane, 400Gb/s InfiniBand is delivered through OSFP connectors on both the switch and the HCA. Each OSFP cage on the switch carries two 400Gb/s InfiniBand ports (which can also operate at 200Gb/s), while the ConnectX-7 HCA provides a single 400Gb/s InfiniBand port. The product range includes active and passive copper cables, transceivers, and MPO fibre cables. Notably, although both ends use OSFP packaging, their physical dimensions differ: the switch-side OSFP module is fitted with a finned heat sink for cooling.
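To make the lane arithmetic behind these figures explicit, here is a small illustrative Python sketch. The 100Gb/s-per-lane and four-lanes-per-port values follow from the description above; the PCIe 5.0 x16 number is an approximate raw signalling rate, included only to show that the host interface has headroom for a 400Gb/s port.

# NDR lane arithmetic for the ConnectX-7 HCA and the switch-side OSFP cage.
serdes_lane_gbps = 100                     # 100 Gb/s per SerDes lane
lanes_per_ndr_port = 4                     # one NDR port is built from 4 lanes

ndr_port_gbps = serdes_lane_gbps * lanes_per_ndr_port          # 400 Gb/s

lanes_per_osfp_cage = 8                    # twin-port OSFP cage on the switch
ports_per_cage = lanes_per_osfp_cage // lanes_per_ndr_port     # 2 x 400G ports

# Approximate host-interface check (raw rate, before encoding overhead):
pcie5_x16_gbps_approx = 16 * 32            # PCIe 5.0: 32 GT/s per lane, x16 is about 512 Gb/s

print(ndr_port_gbps, ports_per_cage, pcie5_x16_gbps_approx)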

OSFP 800G Optical Transceiver

The OSFP-800G SR8 module is purpose-built for 800Gb/s 2xNDR InfiniBand systems, supporting link lengths of up to 30m over OM3 or 50m over OM4 multimode fibre (MMF) at an 850nm wavelength through dual MTP/MPO-12 connectors. Its dual-port design is a significant advancement: two internal transceiver engines fully unlock the switch’s port capacity.

This design allows the switch’s 32 physical OSFP cages to present up to 64 400G NDR interfaces. With its high-density, high-bandwidth design, the module enables data centres to keep pace with the escalating network demands of applications such as high-performance computing and cloud infrastructure.
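The illustrative snippet below ties the module’s dual-engine design to the switch’s port count and records the multimode reach figures quoted above; it is a sketch for orientation rather than vendor tooling.

# Dual-port OSFP-800G SR8 modules populating a 32-cage NDR switch.
osfp_cages = 32                    # physical OSFP cages per QM9700/QM9790 switch
engines_per_module = 2             # two 400G transceiver engines per SR8 module

ndr_400g_ports = osfp_cages * engines_per_module    # 64 x 400G NDR ports

# Multimode reach of the SR8 module (figures from the text above).
reach_m = {"OM3": 30, "OM4": 50}

print(ndr_400g_ports, reach_m)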

Furthermore, the FS OSFP-800G SR8 module offers strong performance and reliability, giving data centres a robust optical interconnect option. It allows the QM9700/QM9790 series switches to reach their full performance potential, supporting high-bandwidth, low-latency data transmission.

NDR Optical Connection Solution

To address the NDR optical connection challenge, the NDR switch ports use OSFP cages with eight channels per interface, each running 100Gb/s SerDes. This allows three mainstream connection options: 800G to 800G, 800G to 2x400G, and 800G to 4x200G. Additionally, each channel can be downgraded from 100Gb/s to 50Gb/s, enabling interoperability with previous-generation HDR devices.
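Because these options follow directly from the channel count and per-channel rate, they can be summarised in a few lines of illustrative Python; the adapter example in the comments assumes single-port 400G HCAs, as described earlier.

# Breakout options from one 8-channel, 100 Gb/s-per-channel OSFP cage (800G aggregate).
channels_per_cage = 8
gbps_per_channel_ndr = 100
gbps_per_channel_hdr = 50          # per-channel downgrade for HDR interoperability

# connection option -> (far-end connectors, channels per far-end connector)
options = {
    "800G to 800G":   (1, 8),      # e.g. switch-to-switch; the far end carries 2 x 400G NDR ports
    "800G to 2x400G": (2, 4),      # e.g. to two single-port 400G adapters
    "800G to 4x200G": (4, 2),
}

for name, (connectors, ch) in options.items():
    print(f"{name}: {connectors} far-end connector(s) at {ch * gbps_per_channel_ndr} Gb/s each")

# HDR compatibility: four 50 Gb/s channels form one 200 Gb/s HDR port.
hdr_port_gbps = 4 * gbps_per_channel_hdr    # 200 Gb/s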

The 400G NDR series of cables and transceivers offers a broad range of options for connecting switch and adapter systems, covering data centre reaches of up to 500 metres to accelerate HPC computing systems. The available connection types, including passive copper cables (DAC), active optical cables (AOC), and optical transceivers with fibre jumpers, cater to different transmission distances and bandwidth requirements, ensuring low latency and an extremely low bit error rate for high-bandwidth HPC and accelerated computing applications.
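As a rough guide to how the media choice tends to map to link length, the helper below uses indicative reach thresholds. These thresholds are assumptions chosen only to illustrate the decision, not FS or NVIDIA specifications, so always confirm reach against the actual cable or transceiver datasheet.

# Illustrative media selection by link length. The thresholds are assumptions
# used only to show the shape of the decision; real limits depend on the
# specific DAC/AOC/transceiver part, so check the datasheet before ordering.
def suggest_media(distance_m: float) -> str:
    if distance_m <= 3:            # assumed passive-DAC reach
        return "passive copper cable (DAC)"
    if distance_m <= 30:           # assumed AOC / multimode reach
        return "active optical cable (AOC) or multimode transceiver"
    if distance_m <= 500:          # upper bound quoted in the text above
        return "optical transceiver with fibre jumpers"
    return "outside the 500 m range discussed here"

for d in (2, 20, 150):
    print(d, "m ->", suggest_media(d))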

For deployment details, please see the article InfiniBand NDR OSFP Solution from the FS community.

Conclusion

In summary, InfiniBand (IB) technology offers unparalleled throughput, intelligent acceleration, and robust performance for HPC and cloud infrastructures. The FS OSFP-800G SR8 module and the NDR optical connection solution further enhance data centre capabilities, enabling the high-bandwidth, low-latency connectivity essential for modern computing applications.

Explore the full range of advanced networking solutions at FS.com and revolutionize your data centre network today!