Enhancing Data Center Networks with InfiniBand Solutions

With the rapid growth of data centres driven by large AI models, cloud computing, and big data analytics, there is an increasing demand for high-speed data transfer and low-latency communication. In this complex network ecosystem, InfiniBand (IB) technology has become a market leader, playing a vital role in meeting the challenges posed by the training and deployment of large models. Building high-speed networks within data centres requires essential components such as high-speed network cards, optical modules, switches, and advanced network interconnect technologies.

NVIDIA Quantum™-2 InfiniBand Switch

When selecting switches, NVIDIA’s QM9700 and QM9790 series stand out as the most advanced devices. Built on the NVIDIA Quantum-2 architecture, they offer 64 NDR 400Gb/s InfiniBand ports within a standard 1U chassis. In practice, a single switch provides a total bidirectional bandwidth of 51.2 terabits per second (Tb/s) and can handle more than 66.5 billion packets per second (BPPS).
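The headline numbers above follow directly from the port count and per-port rate; a quick back-of-the-envelope check (variable names here are illustrative, not from any NVIDIA tooling):

```python
# Sanity check of the QM9700/QM9790 headline bandwidth figure.
PORTS = 64              # NDR ports per 1U switch
PORT_RATE_GBPS = 400    # Gb/s per NDR port

unidirectional_tbps = PORTS * PORT_RATE_GBPS / 1000   # 25.6 Tb/s one way
bidirectional_tbps = 2 * unidirectional_tbps          # 51.2 Tb/s both ways

print(f"{unidirectional_tbps} Tb/s per direction, "
      f"{bidirectional_tbps} Tb/s bidirectional")
```

This is why vendor datasheets quote 51.2 Tb/s: aggregate switch bandwidth is conventionally stated as the bidirectional total.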

The NVIDIA Quantum-2 InfiniBand switches go beyond NDR high-speed data transfer, combining high throughput, in-network computing, advanced intelligent acceleration features, flexibility, and robust construction. These attributes make them an ideal choice for high-performance computing (HPC), artificial intelligence, and large-scale cloud infrastructure.

Additionally, the integration of NDR switches helps minimise overall cost and complexity, thereby promoting the development of data centre network technology.


ConnectX®-7 InfiniBand Card

The NVIDIA ConnectX®-7 InfiniBand network card (HCA) ASIC delivers 400Gb/s of data throughput and supports a 16-lane PCIe 5.0 or PCIe 4.0 host interface. Using advanced SerDes technology at 100Gb/s per lane, 400Gb/s InfiniBand is carried through OSFP connectors on both the switch and HCA ports. An OSFP connector on the switch supports two 400Gb/s (or two 200Gb/s) InfiniBand ports, while the HCA exposes a single 400Gb/s InfiniBand port. The product range includes active and passive copper cables, transceivers, and MPO fibre cables. Notably, although both sides use OSFP packaging, their physical dimensions differ: the switch-side OSFP module is fitted with heat fins for cooling.
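It is worth seeing why a 16-lane PCIe 5.0 host interface is a comfortable fit for a 400Gb/s port. The sketch below is a rough raw-rate calculation only, accounting for 128b/130b line encoding but ignoring PCIe protocol overhead (TLP headers, flow control), so real usable throughput is somewhat lower:

```python
# Rough x16 host-interface bandwidth after 128b/130b line encoding,
# ignoring PCIe protocol overhead (an approximation, not a spec value).
def pcie_x16_gbps(gt_per_s: float) -> float:
    encoding = 128 / 130          # 128b/130b encoding used by PCIe Gen3+
    return gt_per_s * encoding * 16

gen5 = pcie_x16_gbps(32.0)   # PCIe 5.0: 32 GT/s per lane
gen4 = pcie_x16_gbps(16.0)   # PCIe 4.0: 16 GT/s per lane
print(f"Gen5 x16 ~ {gen5:.0f} Gb/s, Gen4 x16 ~ {gen4:.0f} Gb/s")
```

Roughly 504 Gb/s for Gen5 x16 leaves headroom above the 400Gb/s port, while Gen4 x16 (about 252 Gb/s) would bottleneck a sustained 400Gb/s stream, which is why Gen5 hosts are preferred for full NDR rates.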

OSFP 800G Optical Transceiver

The OSFP-800G SR8 module is designed for 800Gb/s 2xNDR InfiniBand systems, reaching up to 30m over OM3 or 50m over OM4 multimode fibre (MMF) at a wavelength of 850nm through dual MTP/MPO-12 connectors. Its dual-port design is a significant advance, incorporating two internal transceiver engines that fully unlock the switch’s potential.

This design allows the 32 physical interfaces to support up to 64 400G NDR interfaces. With its high-density and high-bandwidth design, this module enables data centres to seamlessly meet the escalating network demands of applications such as high-performance computing, artificial intelligence, and cloud infrastructure.
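The 32-to-64 relationship is simply the dual-engine design at work; a minimal sketch (names are illustrative):

```python
# How dual-engine OSFP modules double the usable port count on the switch.
OSFP_CAGES = 32          # physical OSFP interfaces on the QM9700/QM9790
ENGINES_PER_MODULE = 2   # twin 400G transceiver engines per OSFP-800G SR8

ndr_ports = OSFP_CAGES * ENGINES_PER_MODULE
print(f"{OSFP_CAGES} cages -> {ndr_ports} x 400G NDR interfaces")
```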

Furthermore, the FS OSFP-800G SR8 module delivers outstanding performance and reliability, providing robust optical interconnect options for data centres. It allows data centres to exploit the full performance of the QM9700/9790 series switches, enabling high-bandwidth, low-latency data transmission.

NDR Optical Connection Solution

To address the NDR optical connection challenge, NDR switch ports use OSFP interfaces with eight channels each, every channel running 100Gb/s SerDes. This allows three mainstream connection options: 800G to 800G, 800G to 2x400G, and 800G to 4x200G. Additionally, each channel can be downgraded from 100Gb/s to 50Gb/s, enabling interoperability with previous-generation HDR devices.
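The breakout options above are pure lane arithmetic: eight lanes per OSFP port, split evenly across the logical ports at either 100Gb/s (NDR) or 50Gb/s (HDR-compatibility mode) per lane. A minimal sketch under those assumptions:

```python
# Lane arithmetic behind the NDR breakout options: 8 lanes per OSFP port.
LANES = 8

def port_speed_gbps(lane_gbps: int, num_ports: int) -> int:
    """Speed of each logical port when lanes are split evenly."""
    return (LANES // num_ports) * lane_gbps

ndr_2x = port_speed_gbps(100, 2)   # 800G -> 2x400G
ndr_4x = port_speed_gbps(100, 4)   # 800G -> 4x200G
hdr_2x = port_speed_gbps(50, 2)    # lanes downclocked to 50G -> 2x200G (HDR)
print(ndr_2x, ndr_4x, hdr_2x)
```

The same downclocking explains HDR interoperability: eight 50Gb/s lanes split two ways yields exactly the 200Gb/s per-port rate of HDR.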

The 400G NDR series cables and transceivers offer diverse product choices for configuring network switch and adapter systems, covering reaches of up to 500 metres within the data centre to accelerate AI computing systems. The various connector types, including passive copper cables (DAC), active optical cables (AOC), and optical modules with jumpers, cater to different transmission distances and bandwidth requirements, ensuring low latency and an extremely low bit error rate for high-bandwidth AI and accelerated computing applications.

For deployment details, please see the article Infiniband NDR OSFP Solution from the FS community.

Conclusion

In summary, InfiniBand (IB) technology offers unparalleled throughput, intelligent acceleration, and robust performance for AI, HPC, and cloud infrastructures. The FS OSFP-800G SR8 module and the NDR optical connection solution further enhance data centre capabilities, enabling the high-bandwidth, low-latency connectivity essential for modern computing applications.

Explore the full range of advanced networking solutions at FS.com and revolutionize your data centre network today!

InfiniBand: Powering High-Performance Data Centers

Driven by the booming development of cloud computing and big data, InfiniBand has become a key technology and plays a vital role at the core of the data center. But what exactly is InfiniBand technology? What attributes contribute to its widespread adoption? The following guide will answer your questions.

What is InfiniBand?

InfiniBand is an open industry standard that defines a high-speed network for interconnecting servers, storage devices, and more. It leverages point-to-point bidirectional links to enable seamless communication between processors located on different servers. It is compatible with various operating systems such as Linux, Windows, and ESXi.

InfiniBand Network Fabric

InfiniBand, built on a channel-based fabric, comprises key components like HCA (Host Channel Adapter), TCA (Target Channel Adapter), InfiniBand links (connecting channels, ranging from cables to fibers, and even on-board links), and InfiniBand switches and routers (integral for networking). Channel adapters, particularly HCA and TCA, are pivotal in forming InfiniBand channels, ensuring security and adherence to Quality of Service (QoS) levels for transmissions.

InfiniBand vs Ethernet

InfiniBand was developed to address data transmission bottlenecks in high-performance computing clusters. Its primary differences from Ethernet lie in bandwidth, latency, network reliability, and more.

High Bandwidth and Low Latency

InfiniBand provides higher bandwidth and lower latency, meeting the performance demands of large-scale data transfer and real-time communication applications.

RDMA Support

InfiniBand supports Remote Direct Memory Access (RDMA), enabling direct data transfer between node memories. This reduces CPU overhead and improves transfer efficiency.

Scalability

InfiniBand Fabric allows for easy scalability by connecting a large number of nodes and supporting high-density server layouts. Additional InfiniBand switches and cables can expand network scale and bandwidth capacity.

High Reliability

InfiniBand Fabric incorporates redundant designs and fault isolation mechanisms, enhancing network availability and fault tolerance. Alternate paths maintain network connectivity in case of node or connection failures.

Conclusion

The InfiniBand network has undergone rapid iteration, progressing from SDR 10Gbps through DDR 20Gbps, QDR 40Gbps, FDR 56Gbps, and EDR 100Gbps to today's HDR 200Gbps and NDR 400Gbps/800Gbps. For those considering InfiniBand products for their high-performance data centers, further details are available from FS.com.
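Each of these generation names corresponds to a per-lane signalling rate, with the familiar headline speeds coming from the standard 4x link width. A small sketch of that relationship (lane rates are the publicly documented values; the dictionary itself is just for illustration):

```python
# Per-lane signalling rates (Gb/s) behind each InfiniBand generation;
# the common headline figure is a 4x link, i.e. four lanes aggregated.
LANE_RATE_GBPS = {
    "SDR": 2.5, "DDR": 5, "QDR": 10, "FDR": 14.0625,
    "EDR": 25, "HDR": 50, "NDR": 100,
}

for gen, lane in LANE_RATE_GBPS.items():
    print(f"{gen}: 4x link = {4 * lane:g} Gb/s")
```

Note that FDR's 4x rate is 56.25 Gb/s, conventionally rounded to "56Gbps", and that NDR's 800Gbps figure refers to a dual-port (2xNDR) configuration rather than a faster lane.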