RoCE Technology for Data Transmission in HPC Networks

RDMA (Remote Direct Memory Access) enables direct data transfer between devices in a network, and RoCE (RDMA over Converged Ethernet) is a leading implementation of this technology. RoCE delivers high-speed, low-latency data transmission, making it well suited to high-performance computing and cloud environments.

Definition

As a form of RDMA, RoCE is a network protocol defined by the InfiniBand Trade Association (IBTA) that carries RDMA traffic over a converged Ethernet network. In short, it can be regarded as the application of RDMA technology in hyper-converged data centers, cloud, storage, and virtualized environments. It offers all the benefits of RDMA together with the familiarity of Ethernet. To understand RoCE in more depth, see the article RDMA over Converged Ethernet Guide | FS Community.

Types

There are two RDMA over Converged Ethernet versions: RoCE v1 and RoCE v2. Which version is used depends on the network adapter or card.

RoCE v1

RoCE v1 retains the interface, transport layer, and network layer of InfiniBand (IB) and replaces the IB link layer and physical layer with the Ethernet link layer and physical layer. In the link-layer frame of a RoCE packet, the EtherType field is set to 0x8915, unambiguously identifying the frame as RoCE. Because RoCE v1 does not adopt an IP network layer, its packets carry no IP header; they therefore cannot be routed at the network layer and can only be forwarded within a single Layer 2 network.
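To make the encapsulation concrete, the following C sketch outlines the on-wire layout of a RoCE v1 frame: an Ethernet header whose EtherType is 0x8915, followed by the InfiniBand Global Route Header (GRH) and Base Transport Header (BTH). The struct names and field widths are simplified for illustration and are not a bit-exact rendering of the IBTA specification.

#include <stdint.h>

/* Illustrative sketch of RoCE v1 on-wire encapsulation (simplified). */

#define ROCE_V1_ETHERTYPE 0x8915  /* marks the frame as RoCE v1 */

struct eth_hdr {                  /* Ethernet link layer */
    uint8_t  dst_mac[6];
    uint8_t  src_mac[6];
    uint16_t ethertype;           /* 0x8915 for RoCE v1 */
};

struct ib_grh {                   /* InfiniBand Global Route Header (kept from IB) */
    uint32_t ver_tclass_flow;     /* version, traffic class, flow label */
    uint16_t payload_len;
    uint8_t  next_hdr;
    uint8_t  hop_limit;
    uint8_t  sgid[16];            /* source GID */
    uint8_t  dgid[16];            /* destination GID */
};

struct ib_bth {                   /* InfiniBand Base Transport Header (kept from IB) */
    uint8_t  opcode;
    uint8_t  flags;               /* solicited event, migration, pad count, version */
    uint16_t pkey;
    uint32_t dest_qp;             /* reserved byte + 24-bit destination QP number */
    uint32_t psn;                 /* ack-request bit + 24-bit packet sequence number */
};

/* A RoCE v1 frame is laid out as: eth_hdr | ib_grh | ib_bth | payload | ICRC.
 * There is no IP header, which is why the packet cannot be routed at Layer 3. */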

RoCE v2

RoCE v2 builds on RoCE v1 with substantial enhancements. It replaces the InfiniBand (IB) network layer used by RoCE v1 with the Ethernet (IP) network layer and a transport layer based on UDP, and it uses the DSCP and ECN fields of the IP header to implement congestion control. As a result, RoCE v2 packets can be routed, giving the protocol much better scalability. Because RoCE v2 has effectively superseded the original protocol, references to RoCE generally mean RoCE v2 unless the first generation is explicitly specified.

Also check: An In-Depth Guide to RoCE v2 Network | FS Community
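For comparison, the sketch below shows the RoCE v2 encapsulation in the same style: a standard Ethernet frame carrying an IPv4 header (whose DSCP and ECN bits occupy the former ToS byte) and a UDP header with destination port 4791, the port assigned to RoCE v2, followed by the same IB transport header as above. Field widths are again simplified for illustration.

#include <stdint.h>

/* Illustrative sketch of RoCE v2 encapsulation (simplified). */

#define ROCE_V2_UDP_DPORT 4791    /* UDP destination port assigned to RoCE v2 */

struct ipv4_hdr {
    uint8_t  ver_ihl;             /* version and header length */
    uint8_t  dscp_ecn;            /* DSCP (6 bits) + ECN (2 bits), used for congestion control */
    uint16_t total_len;
    uint16_t id;
    uint16_t frag_off;
    uint8_t  ttl;
    uint8_t  protocol;            /* 17 = UDP */
    uint16_t checksum;
    uint32_t src_addr;
    uint32_t dst_addr;
};

struct udp_hdr {
    uint16_t src_port;            /* often varied per flow to help ECMP load balancing */
    uint16_t dst_port;            /* ROCE_V2_UDP_DPORT */
    uint16_t length;
    uint16_t checksum;
};

/* A RoCE v2 packet is laid out as: eth_hdr | ipv4_hdr | udp_hdr | ib_bth | payload | ICRC.
 * Because it carries a normal IP header, the packet can be routed at Layer 3. */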

InfiniBand vs. RoCE

Compared with InfiniBand, RoCE offers greater versatility and relatively lower cost. It can be used not only to build high-performance RDMA networks but also within traditional Ethernet networks. However, configuring parameters such as Headroom, PFC (Priority-based Flow Control), and ECN (Explicit Congestion Notification) on switches can be complex. In large deployments, especially those with many network cards, the overall throughput of a RoCE network may be slightly lower than that of an InfiniBand network.

In real-world business scenarios, the two also differ significantly in performance, scale, and operations and maintenance. For a detailed comparison, see the FS community article InfiniBand vs. RoCE: How to choose a network for AI data center.

Benefits

RDMA over Converged Ethernet delivers low-latency, high-performance data transmission by providing direct memory access through the network interface. Because memory on remote servers can be accessed across the switched network without consuming CPU cycles, the technology minimizes CPU involvement and improves bandwidth and scalability. Its zero-copy feature transfers data directly to and from remote buffers, further improving latency and throughput. Notably, RoCE runs over existing Ethernet infrastructure without requiring new equipment, which means substantial cost savings for companies handling massive data volumes.

How FS Can Help

In the fast-evolving landscape of AI data center networks, selecting the right solution is paramount. Drawing on a skilled technical team and vast experience in diverse application scenarios, FS utilizes RoCE to tackle the formidable challenges encountered in High-Performance Computing (HPC). FS offers a range of products, including NVIDIA® InfiniBand Switches, 100G/200G/400G/800G InfiniBand transceivers and NVIDIA® InfiniBand Adapters, establishing itself as a professional provider of communication and high-speed network system solutions for networks, data centers, and telecom clients. Take action now – register for more information and experience our products through a Free Product Trial.

Revolutionize High-Performance Computing with RDMA

To address the efficiency challenges of rapidly growing data storage and retrieval within data centers, Ethernet-converged distributed storage networks are becoming increasingly popular. However, in storage networks dominated by large flows, packet loss caused by congestion reduces transmission efficiency and aggravates the congestion itself. RDMA technology emerged to solve this set of problems.

What is RDMA?

RDMA (Remote Direct Memory Access) is an advanced technology designed to reduce the latency of server-side data processing during network transfers. It allows user-level applications to read from and write to remote memory directly, without involving the CPU in multiple memory copies: RDMA bypasses the kernel and hands data straight to the network card, achieving high throughput, ultra-low latency, and minimal CPU overhead. Today, RDMA's transport protocol over Ethernet is RoCEv2 (RDMA over Converged Ethernet v2). RoCEv2 is a connectionless protocol carried over UDP (User Datagram Protocol), which makes it faster and less CPU-intensive than the connection-oriented TCP (Transmission Control Protocol).
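As a minimal sketch of this kernel-bypass model, the C fragment below uses the libibverbs API (from the rdma-core package) to open an RDMA-capable device, register a user-space buffer so the NIC can access it directly, and create a queue pair. Error handling, queue-pair state transitions, and the exchange of QP numbers, GIDs, and keys with the remote peer are omitted; buffer and queue sizes are arbitrary.

#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>   /* libibverbs, part of rdma-core */

int main(void)
{
    /* Pick the first RDMA-capable device (e.g. a RoCE NIC). */
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) { fprintf(stderr, "no RDMA device found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);          /* protection domain */

    /* Register a user-space buffer so the NIC can DMA into/out of it. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* Completion queue and a reliable-connected queue pair. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    struct ibv_qp_init_attr attr = {
        .send_cq = cq, .recv_cq = cq,
        .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RC,
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &attr);

    printf("MR lkey=0x%x rkey=0x%x, QP num=%u\n", mr->lkey, mr->rkey, qp->qp_num);

    /* At this point the QP would be transitioned to RTS and RDMA writes
     * posted with ibv_post_send(); that exchange is omitted here.       */

    ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}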

Building Lossless Network with RDMA

RDMA networks achieve lossless transmission through the deployment of PFC and ECN. PFC controls RDMA-specific queue traffic on the link, applying backpressure to upstream devices when congestion occurs at a switch's ingress port. ECN provides end-to-end congestion control by marking packets when congestion occurs at the egress port, prompting the sending end to reduce its transmission rate.

Optimal network performance is achieved by tuning the ECN and PFC buffer thresholds so that ECN triggers before PFC. The network can then keep forwarding data at full speed while the server proactively reduces its transmission rate to relieve congestion.
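The relationship between the two thresholds can be shown with a small C sketch. The queue model and threshold values below are purely hypothetical; the point is simply that the ECN marking threshold sits below the PFC XOFF threshold, so end-to-end rate reduction kicks in before link-level pausing.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical buffer thresholds, in KB of queue occupancy.
 * The key property is ECN_MARK_KB < PFC_XOFF_KB, so ECN fires first. */
#define ECN_MARK_KB  200   /* start marking packets with the ECN CE codepoint */
#define PFC_XOFF_KB  400   /* send a PFC PAUSE frame to the upstream device   */

struct queue_state {
    unsigned depth_kb;     /* current occupancy of the RDMA traffic queue */
};

/* Conceptual per-packet congestion check in the switch pipeline. */
static void handle_congestion(const struct queue_state *q,
                              bool *mark_ecn, bool *send_pause)
{
    *mark_ecn   = q->depth_kb >= ECN_MARK_KB;   /* end-to-end: sender slows down */
    *send_pause = q->depth_kb >= PFC_XOFF_KB;   /* last resort: pause the link   */
}

int main(void)
{
    struct queue_state q = { .depth_kb = 250 };
    bool ecn, pause;
    handle_congestion(&q, &ecn, &pause);
    printf("depth=%uKB -> mark ECN: %d, send PFC pause: %d\n",
           q.depth_kb, ecn, pause);
    return 0;
}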

Accelerating Cluster Performance with GPU Direct-RDMA

Traditional TCP networking relies heavily on the CPU for packet processing and often struggles to fully utilize the available bandwidth. In AI environments, RDMA has therefore become an indispensable transfer technology, particularly for large-scale cluster training. Beyond high-performance transfers of user-space data held in CPU memory, it also enables direct transfers between GPUs in clusters spanning multiple servers. GPU Direct-RDMA is a key component in optimizing HPC/AI performance, and NVIDIA supports it to enhance the performance of GPU clusters.
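As a rough sketch of how GPU Direct-RDMA is typically used from application code, the fragment below allocates a buffer with cudaMalloc() and registers it with ibv_reg_mr(), after which the NIC can DMA directly to and from GPU memory without staging through host RAM. This assumes an NVIDIA GPU, a GPUDirect-capable RDMA NIC, and the nvidia-peermem kernel module; the helper function name, buffer size, and access flags are illustrative.

#include <stdio.h>
#include <cuda_runtime.h>        /* CUDA runtime API */
#include <infiniband/verbs.h>    /* libibverbs       */

/* Registers GPU memory for RDMA, assuming GPUDirect RDMA support
 * (nvidia-peermem loaded) so ibv_reg_mr() accepts the device pointer.
 * 'pd' is an already-allocated protection domain.                     */
struct ibv_mr *register_gpu_buffer(struct ibv_pd *pd, size_t len, void **gpu_buf)
{
    if (cudaMalloc(gpu_buf, len) != cudaSuccess) {   /* GPU memory, not host memory */
        fprintf(stderr, "cudaMalloc failed\n");
        return NULL;
    }

    /* With GPUDirect RDMA, the NIC can DMA straight into this buffer. */
    struct ibv_mr *mr = ibv_reg_mr(pd, *gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "ibv_reg_mr on GPU memory failed (is nvidia-peermem loaded?)\n");
        cudaFree(*gpu_buf);
        return NULL;
    }
    return mr;  /* mr->rkey can be shared with peers for RDMA reads/writes */
}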

Streamlining RDMA Product Selection

Building a high-performance RDMA network requires essentials such as RDMA adapters and powerful servers, but success also hinges on critical components such as high-speed optical modules, switches, and optical cables. As a leading provider of high-speed data transmission solutions, FS offers a diverse range of top-quality products, including high-performance switches, 200G/400G/800G optical modules, smart network cards, and more, all designed to meet the stringent requirements of low-latency, high-speed data transmission.