With the rapid growth of data centers driven by large-scale models, cloud computing, and big data analytics, there is an increasing demand for high-speed data transfer and low-latency communication. In this complex network ecosystem, InfiniBand (IB) technology has become a market leader, playing a vital role in addressing the challenges posed by the training and deployment of large models. Constructing high-speed networks within data centers requires essential components such as high-rate network cards, optical modules, switches, and advanced network interconnect technologies.
NVIDIA Quantum™-2 InfiniBand Switch
When selecting switches, NVIDIA’s QM9700 and QM9790 series stand out as the most advanced devices. Built on NVIDIA Quantum-2 architecture, they offer 64 NDR 400Gb/s InfiniBand ports within a standard 1U chassis. This breakthrough translates to an individual switch providing a total bidirectional bandwidth of 51.2 terabits per second (Tb/s), along with an unprecedented handling capacity exceeding 66.5 billion packets per second (BPPS).
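As a quick sanity check, the headline figure follows directly from the port count and per-port rate quoted above. A back-of-the-envelope sketch in Python (all numbers come from the paragraph above):

```python
# Back-of-the-envelope check of the QM9700/QM9790 headline bandwidth.
ports = 64           # NDR ports per 1U switch
rate_gbps = 400      # per-port line rate, Gb/s
directions = 2       # bidirectional

total_tbps = ports * rate_gbps * directions / 1000
print(f"Aggregate bidirectional bandwidth: {total_tbps} Tb/s")  # 51.2 Tb/s
```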
The NVIDIA Quantum-2 InfiniBand switches extend beyond their NDR high-speed data transfer capabilities, incorporating extensive throughput, on-chip compute processing, advanced intelligent acceleration features, adaptability, and sturdy construction. These attributes establish them as the quintessential selections for sectors involving high-performance computing (HPC), artificial intelligence, and expansive cloud-based infrastructures. Additionally, the integration of NDR switches helps minimize overall expenses and complexity, propelling the progression and evolution of data center network technologies.
The NVIDIA ConnectX®-7 InfiniBand network card (HCA) ASIC delivers a data throughput of 400Gb/s, supporting 16 lanes of PCIe 5.0 or PCIe 4.0 host interface. Utilizing advanced SerDes technology at 100Gb/s per lane, 400Gb/s InfiniBand is achieved through OSFP connectors on both the switch and HCA ports. The OSFP connector on the switch supports two 400Gb/s InfiniBand ports or four 200Gb/s InfiniBand ports, while the network card HCA features one 400Gb/s InfiniBand port. The product range includes active and passive copper cables, transceivers, and MPO fiber cables. Notably, although both sides use OSFP packaging, their physical dimensions differ: the switch-side OSFP module is equipped with heat-sink fins for cooling.
OSFP 800G Optical Transceiver
The OSFP-800G SR8 module is designed for 800Gb/s 2xNDR InfiniBand systems, supporting link lengths of up to 30m over OM3 or 50m over OM4 multimode fiber (MMF) at an 850nm wavelength via dual MTP/MPO-12 connectors. The dual-port design is a key innovation: two internal transceiver engines fully unleash the potential of the switch, allowing its 32 physical interfaces to provide up to 64 400G NDR interfaces. This high-density, high-bandwidth design enables data centers to meet the growing network demands of applications such as high-performance computing, artificial intelligence, and cloud infrastructure.
FS’s OSFP-800G SR8 module delivers superior performance and dependability, providing strong optical interconnection options for data centers. It empowers data centers to harness the full performance of the QM9700/9790 series switches, supporting data transmission with both high bandwidth and low latency.
NDR Optical Connection Solution
Addressing the NDR optical connection challenge, the NDR switch ports utilize OSFP with eight channels per interface, each employing 100Gb/s SerDes. This allows for three mainstream connection speed options: 800G to 800G, 800G to 2X400G, and 800G to 4X200G. Additionally, each channel supports a downgrade from 100Gb/s to 50Gb/s, facilitating interoperability with previous-generation HDR devices. The 400G NDR series cables and transceivers offer diverse product choices for configuring network switch and adapter systems, focusing on data center reaches of up to 500 meters to accelerate AI computing systems. The various connector types, including passive copper cables (DAC), active optical cables (AOC), and optical modules with jumpers, cater to different transmission distances and bandwidth requirements, ensuring low latency and an extremely low bit error rate for high-bandwidth AI and accelerated computing applications. See the FS community article InfiniBand NDR OSFP Solution for deployment details.
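The breakout arithmetic above can be summarized in a few lines. A minimal sketch (the channel counts and per-lane rates follow from the eight 100Gb/s SerDes lanes per OSFP interface described above):

```python
# Each NDR OSFP interface carries 8 channels at 100 Gb/s per lane.
LANES, LANE_GBPS = 8, 100

# Mainstream breakout options: name -> (ports, lanes per port)
breakouts = {
    "800G to 800G":   (1, 8),
    "800G to 2x400G": (2, 4),
    "800G to 4x200G": (4, 2),
}

for name, (ports, lanes) in breakouts.items():
    assert ports * lanes == LANES          # every lane is accounted for
    print(f"{name}: {ports} port(s) x {lanes * LANE_GBPS}G")

# HDR interoperability: each lane can drop from 100 Gb/s to 50 Gb/s,
# so a 4-lane port runs at 200G (HDR) instead of 400G (NDR).
```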
RDMA (Remote Direct Memory Access) enables direct data transfer between devices in a network, and RoCE (RDMA over Converged Ethernet) is a leading implementation of this technology. It delivers high-speed, low-latency data transmission, making it ideal for high-performance computing and cloud environments.
As a type of RDMA, RoCE is a network protocol defined in the InfiniBand Trade Association (IBTA) standard, allowing RDMA over a converged Ethernet network. In short, it can be regarded as the application of RDMA technology in hyper-converged data centers, cloud, storage, and virtualized environments. It possesses all the benefits of RDMA technology along with the familiarity of Ethernet. To understand RoCE in depth, read the article RDMA over Converged Ethernet Guide | FS Community.
Generally, there are two RDMA over Converged Ethernet versions: RoCE v1 and RoCE v2. Which one applies depends on the network adapter or card used.
Retaining the interface, transport layer, and network layer of InfiniBand (IB), the RoCE protocol substitutes the link layer and physical layer of IB with those of Ethernet. In the link-layer data frame of a RoCE packet, the EtherType field value is specified by IEEE as 0x8915, unmistakably identifying it as a RoCE packet. However, because the RoCE protocol does not adopt the Ethernet network layer, RoCE packets lack an IP field. Consequently, network-layer routing is unfeasible for RoCE packets, restricting their transmission to a Layer 2 network.
Building on this foundation, the RoCE v2 protocol introduces substantial enhancements. RoCE v2 replaces the InfiniBand network layer used by RoCE v1 with the Ethernet (IP) network layer plus a transport layer employing the UDP protocol, and it harnesses the DSCP and ECN fields of the IP datagram to implement congestion control. This enables RoCE v2 packets to be routed, ensuring superior scalability. As RoCE v2 fully supersedes the original protocol, references to RoCE generally denote RoCE v2, unless the first generation is explicitly specified.
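The framing difference between the two versions is easy to see on the wire. Below is a minimal sketch using Scapy for illustration; the 12-byte Raw payload is a placeholder for the InfiniBand Base Transport Header (which Scapy does not model natively), and 4791 is the IANA-assigned UDP destination port for RoCEv2:

```python
from scapy.all import Ether, IP, UDP, Raw

bth = Raw(b"\x00" * 12)  # placeholder for the InfiniBand Base Transport Header

# RoCE v1: IB transport directly over Ethernet, EtherType 0x8915 -- no IP
# header, so the packet cannot be routed beyond its Layer 2 domain.
roce_v1 = Ether(type=0x8915) / bth

# RoCE v2: IB transport over UDP/IP (dst port 4791) -- the IP header's DSCP
# and ECN bits enable congestion control, and the packet is routable.
roce_v2 = Ether() / IP(dst="10.0.0.2", tos=0x02) / UDP(dport=4791) / bth

roce_v1.show()
roce_v2.show()
```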
In comparison to InfiniBand, RoCE presents the advantages of increased versatility and relatively lower costs. It not only serves to construct high-performance RDMA networks but also finds utility in traditional Ethernet networks. However, configuring parameters such as Headroom, PFC (Priority-based Flow Control), and ECN (Explicit Congestion Notification) on switches can pose complexity. In extensive deployments, especially those featuring numerous network cards, the overall throughput performance of RoCE networks may exhibit a slight decrease compared to InfiniBand networks.
RDMA over Converged Ethernet ensures low-latency and high-performance data transmission by providing direct memory access through the network interface. This technology minimizes CPU involvement, optimizing bandwidth and scalability as it enables access to remote switch or server memory without consuming CPU cycles. The zero-copy feature facilitates efficient data transfer to and from remote buffers, contributing to improved latency and throughput with RoCE. Notably, RoCE eliminates the need for new equipment or Ethernet infrastructure replacement, leading to substantial cost savings for companies dealing with massive data volumes.
How FS Can Help
In the fast-evolving landscape of AI data center networks, selecting the right solution is paramount. Drawing on a skilled technical team and vast experience in diverse application scenarios, FS utilizes RoCE to tackle the formidable challenges encountered in High-Performance Computing (HPC). FS offers a range of products, including NVIDIA® InfiniBand Switches, 100G/200G/400G/800G InfiniBand transceivers and NVIDIA® InfiniBand Adapters, establishing itself as a professional provider of communication and high-speed network system solutions for networks, data centers, and telecom clients. Take action now – register for more information and experience our products through a Free Product Trial.
To address the efficiency challenges of rapidly growing data storage and retrieval within data centers, Ethernet-converged distributed storage networks are becoming increasingly popular. However, in storage networks whose traffic is dominated by large flows, packet loss caused by congestion reduces data transmission efficiency and aggravates the congestion itself. RDMA technology has emerged to solve this series of problems.
What is RDMA?
RDMA (Remote Direct Memory Access) is an advanced technology designed to reduce the latency associated with server-side data processing during network transfers. Allowing user-level applications to directly read from and write to remote memory without involving the CPU in multiple memory copies, RDMA bypasses the kernel and writes data directly to the network card. This achieves high throughput, ultra-low latency, and minimal CPU overhead. Presently, RDMA’s transport protocol over Ethernet is RoCEv2 (RDMA over Converged Ethernet v2). RoCEv2, a connectionless protocol based on UDP (User Datagram Protocol), is faster and consumes fewer CPU resources compared to the connection-oriented TCP (Transmission Control Protocol).
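To make the kernel-bypass point concrete, here is a rough sketch of the setup an RDMA application performs, assuming the pyverbs bindings shipped with rdma-core. The device name and access flags are illustrative, exact constructor arguments may differ by version, and a real application would additionally create queue pairs and exchange keys out of band:

```python
import pyverbs.enums as e
from pyverbs.device import Context
from pyverbs.pd import PD
from pyverbs.mr import MR

# Open the RDMA device (name is illustrative) and allocate a protection domain.
ctx = Context(name="mlx5_0")
pd = PD(ctx)

# Register a memory region: the NIC pins this buffer and can then read/write
# it directly, without per-transfer kernel involvement or CPU memory copies.
mr = MR(pd, 4096, e.IBV_ACCESS_LOCAL_WRITE | e.IBV_ACCESS_REMOTE_WRITE)

# A peer that learns mr.rkey and the buffer address can issue RDMA READ/WRITE
# operations against this region with no CPU cycles spent on the target host.
print(f"lkey={mr.lkey}, rkey={mr.rkey}")
```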
Building Lossless Network with RDMA
RDMA networks achieve lossless transmission through the deployment of PFC and ECN functionalities. PFC technology controls RDMA-specific queue traffic on the link, applying backpressure to upstream devices during congestion at the switch’s ingress port. With ECN technology, end-to-end congestion control is achieved by marking packets during congestion at the egress port, prompting the sending end to reduce its transmission rate.
Optimal network performance is achieved by adjusting buffer thresholds for ECN and PFC, ensuring faster triggering of ECN than PFC. This allows the network to maintain full-speed data forwarding while actively reducing the server’s transmission rate to address congestion.
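Because ECN must fire before PFC, deployments typically validate the buffer thresholds up front. A simplified sketch of that check (threshold names and byte values are illustrative, not a vendor CLI):

```python
def validate_lossless_queue(ecn_min_kb: int, pfc_xoff_kb: int, headroom_kb: int) -> None:
    """Sanity-check RDMA queue thresholds: ECN must mark before PFC pauses."""
    # ECN marking must start below the PFC XOFF point, so the sender slows
    # down end-to-end before the switch resorts to hop-by-hop pause frames.
    assert ecn_min_kb < pfc_xoff_kb, "ECN must trigger before PFC"
    # Headroom must absorb traffic still in flight after XOFF is sent upstream.
    assert headroom_kb > 0, "reserve headroom for post-XOFF packets"

# Illustrative values only; real thresholds depend on port speed and cable length.
validate_lossless_queue(ecn_min_kb=150, pfc_xoff_kb=300, headroom_kb=100)
```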
Accelerating Cluster Performance with GPU Direct-RDMA
The traditional TCP network relies heavily on CPU processing for packet management and often struggles to fully utilize available bandwidth. In AI environments, RDMA has therefore become an indispensable network transfer technology, particularly during large-scale cluster training: it enables high-performance transfers of user-space data held in CPU memory and also supports GPU-to-GPU transfers across the multiple servers of a GPU cluster. GPUDirect RDMA is a key component in optimizing HPC/AI performance, and NVIDIA enhances the performance of GPU clusters by supporting it.
Streamlining RDMA Product Selection
In building high-performance RDMA networks, essential elements like RDMA adapters and powerful servers are necessary, but success also hinges on critical components such as high-speed optical modules, switches, and optical cables. As a leading provider of high-speed data transmission solutions, FS offers a diverse range of top-quality products, including high-performance switches, 200/400/800G optical modules, smart network cards, and more. These are precisely designed to meet the stringent requirements of low-latency and high-speed data transmission.
Driven by the booming development of cloud computing and big data, InfiniBand has become a key technology and plays a vital role at the core of the data center. But what exactly is InfiniBand technology? What attributes contribute to its widespread adoption? The following guide will answer your questions.
What is InfiniBand?
InfiniBand is an open industry standard that defines a high-speed network for interconnecting servers, storage devices, and more. It leverages point-to-point bidirectional links to enable seamless communication between processors located on different servers, and it is compatible with operating systems such as Linux, Windows, and ESXi.
InfiniBand Network Fabric
InfiniBand, built on a channel-based fabric, comprises key components like HCA (Host Channel Adapter), TCA (Target Channel Adapter), InfiniBand links (connecting channels, ranging from cables to fibers, and even on-board links), and InfiniBand switches and routers (integral for networking). Channel adapters, particularly HCA and TCA, are pivotal in forming InfiniBand channels, ensuring security and adherence to Quality of Service (QoS) levels for transmissions.
InfiniBand vs Ethernet
InfiniBand was developed to address data transmission bottlenecks in high-performance computing clusters. The primary differences from Ethernet lie in bandwidth, latency, network reliability, and more.
High Bandwidth and Low Latency
InfiniBand provides higher bandwidth and lower latency, meeting the performance demands of large-scale data transfer and real-time communication applications.
InfiniBand supports Remote Direct Memory Access (RDMA), enabling direct data transfer between node memories. This reduces CPU overhead and improves transfer efficiency.
InfiniBand Fabric allows for easy scalability by connecting a large number of nodes and supporting high-density server layouts. Additional InfiniBand switches and cables can expand network scale and bandwidth capacity.
InfiniBand Fabric incorporates redundant designs and fault isolation mechanisms, enhancing network availability and fault tolerance. Alternate paths maintain network connectivity in case of node or connection failures.
The InfiniBand network has undergone rapid iterations, progressing through SDR 10Gbps, DDR 20Gbps, QDR 40Gbps, FDR 56Gbps, and EDR 100Gbps to today’s HDR 200Gbps and NDR 400Gbps/800Gbps InfiniBand. For those considering the implementation of InfiniBand products in their high-performance data centers, further details are available from FS.com.
It is well known that training large models is done on clusters of machines, preferably with many GPUs per server. This article introduces the professional terminology and common network architectures of GPU computing.
Exploring Key Components in GPU Computing
PCIe Switch Chip
In the domain of high-performance GPU computing, vital elements such as CPUs, memory modules, NVMe storage, GPUs, and network cards establish fluid connections via the PCIe (Peripheral Component Interconnect Express) bus or specialized PCIe switch chips.
NVLink is a wire-based serial multi-lane near-range communications link developed by NVIDIA. Unlike PCI Express, a device can consist of multiple NVLinks, and devices use mesh networking to communicate instead of a central hub. The protocol was first announced in March 2014 and uses a proprietary high-speed signaling interconnect (NVHS).
The technology supports full-mesh interconnection between GPUs on the same node, and its development from NVLink 1.0 through NVLink 2.0 and 3.0 to NVLink 4.0 has significantly increased bidirectional bandwidth, improving the performance of GPU computing applications.
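For reference, the commonly cited per-GPU aggregate bidirectional bandwidths across those generations can be captured in a small lookup (figures as published by NVIDIA for the P100, V100, A100, and H100 respectively; a sketch, not an exhaustive spec):

```python
# Per-GPU aggregate bidirectional NVLink bandwidth by generation (GB/s).
nvlink_bw_gb_per_s = {
    "NVLink 1.0": 160,   # 4 links  (P100)
    "NVLink 2.0": 300,   # 6 links  (V100)
    "NVLink 3.0": 600,   # 12 links (A100)
    "NVLink 4.0": 900,   # 18 links (H100)
}

for gen, bw in nvlink_bw_gb_per_s.items():
    print(f"{gen}: {bw} GB/s aggregate bidirectional")
```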
NVSwitch is a switching chip developed by NVIDIA, designed specifically for high-performance computing and artificial intelligence applications. Its primary function is to provide high-speed, low-latency communication between multiple GPUs within the same host.
Unlike the NVSwitch, which is integrated into GPU modules within a single host, the NVLink Switch serves as a standalone switch specifically engineered for linking GPUs in a distributed computing environment.
Several GPU manufacturers have taken innovative approaches to the speed bottleneck by stacking multiple DDR chips to form so-called high-bandwidth memory (HBM) and integrating it with the GPU. This design removes the need for each GPU to traverse the PCIe switch chip when engaging its dedicated memory. As a result, this strategy significantly increases data transfer speeds, potentially by orders of magnitude.
In large-scale GPU computing training, performance is directly tied to data transfer speeds, involving pathways such as PCIe, memory, NVLink, HBM, and network bandwidth. Different bandwidth units are used to measure these data rates.
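The unit distinction matters: network links are usually quoted in Gb/s, while memory and PCIe figures are quoted in GB/s. A small conversion sketch using PCIe 5.0 x16 as an example (raw rate before protocol overhead; the 128b/130b encoding factor is the standard PCIe 5.0 line coding):

```python
# Bits vs bytes: network links are quoted in Gb/s, memory/PCIe often in GB/s.
def gbps_to_gbytes(gbps: float) -> float:
    return gbps / 8

# PCIe 5.0: 32 GT/s per lane with 128b/130b encoding, 16 lanes.
lane_gt_s, encoding, lanes = 32, 128 / 130, 16
pcie5_x16 = gbps_to_gbytes(lane_gt_s * encoding * lanes)
print(f"PCIe 5.0 x16 ~= {pcie5_x16:.1f} GB/s per direction")   # ~63.0 GB/s

print(f"400 Gb/s NDR link = {gbps_to_gbytes(400):.0f} GB/s")   # 50 GB/s
```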
Storage Network Card
The storage network card in GPU architecture connects to the CPU via PCIe, enabling communication with distributed storage systems. It plays a crucial role in efficient data reading and writing for deep learning model training. Additionally, the storage network card handles node management tasks, including SSH (Secure Shell) remote login, system performance monitoring, and collecting related data. These tasks help monitor and maintain the running status of the GPU cluster.
In a full mesh network topology, each node is connected directly to all the other nodes. Usually, 8 GPUs are connected in a full-mesh configuration through six NVSwitch chips, also referred to as NVSwitch fabric.
This fabric optimizes data transfer with a bidirectional bandwidth, providing efficient communication between GPUs and supporting parallel computing tasks. The bandwidth per line depends on the NVLink technology utilized, such as NVLink3, enhancing the overall performance in large-scale GPU clusters.
IDC GPU Fabric
The fabric mainly includes computing network and storage network. The computing network is mainly used to connect GPU nodes and support the collaboration of parallel computing tasks. This involves transferring data between multiple GPUs, sharing calculation results, and coordinating the execution of massively parallel computing tasks. The storage network mainly connects GPU nodes and storage systems to support large-scale data read and write operations. This includes loading data from the storage system into GPU memory and writing calculation results back to the storage system.
The emergence of AI applications and large-scale models (such as ChatGPT) has made computing power an indispensable infrastructure for the AI industry. With the ever-increasing demand for swifter communication in supercomputing, 800G high-speed optical modules have evolved into a crucial component of artificial intelligence servers. Here are some key reasons why the industry is progressively favoring 800G optical transceiver and solutions.
Bandwidth-Intensive AI Workloads
In artificial intelligence computing applications, especially those involving deep learning and neural networks, vast amounts of data must be transmitted over the network. The higher capacity of 800G transceivers helps meet the bandwidth requirements of these intensive workloads.
Data Center Interconnect
With the prevalence of cloud computing, efficient connections between data centers become crucial. 800G optical transceivers enable faster and more reliable connections between data centers, facilitating seamless data exchange and reducing latency.
Transition to Spine-Leaf Architecture
As east-west traffic experiences rapid growth within data centers, the traditional three-tier architecture faces progressively more challenging tasks and heightened performance demands. The adoption of 800G optical transceivers has propelled the emergence of the spine-leaf network architecture, offering multiple advantages such as high bandwidth utilization, outstanding scalability, predictable network latency, and enhanced security.
With the exponential growth in the volume of data processed by artificial intelligence applications, choosing to invest in 800G optical transceivers ensures that the network can meet the continuously growing data demands, providing future-oriented assurance for the infrastructure.
The adoption of 800G optical transceivers offers a forward-looking solution to meet the ongoing growth in data processing and transmission. Indeed, the collaborative interaction between artificial intelligence computing and high-speed optical communication will play a crucial role in shaping the future of information technology infrastructure.
How FS Can Help
The profound impact of artificial intelligence on data center networks highlights the critical role of 800G optical transceivers. Ready to elevate your network experience? As a reliable network solution provider, FS provides a complete 800G product portfolio designed for global hyperscale cloud data centers. Seize the opportunity – register now for enhanced connectivity or apply for a personalized high-speed solution design consultation.
Explore the vast potential of 800G optical modules in the AI era in the following article:
In the era of ultra-high-speed data transmission, MTP/MPO cables have become a key player, especially in the context of 800G networks. In essence, MTP/MPO cables emerge as catalysts for the evolution toward 800G networks, offering a harmonious blend of high-density connectivity, reliability, and scalability. This article will delve into the advantages of MTP/MPO cables in 800G networks and provide specific solutions for constructing an 800G network, offering valuable insights for upgrading your existing data center.
Challenges Faced in 800G Data Transmission
As a critical hub for storing and processing vast amounts of data, data centers require high-speed and stable networks to support data transmission and processing. The 800G network achieves a data transfer rate of 800 Gigabits per second (Gbps) and can meet the demands of large-scale data transmission and processing in data centers, enhancing overall efficiency.
Therefore, many major internet companies are either constructing new 800G data centers or upgrading existing data centers from 100G or 400G to 800G speeds. However, the pursuit of 800G data transmission faces numerous complex challenges that necessitate innovative solutions. Here, we analyze the intricate obstacles associated with achieving ultra-fast data transmission.
Insufficient Bandwidth & High Latency
The 800G network demands extensive data transmission, placing higher requirements on bandwidth. It necessitates network equipment capable of supporting greater data throughput, particularly in terms of connection cables. Ordinary fiber patch cables typically contain only a single fiber, and their optical and physical characteristics are inadequate for handling massive data volumes, failing to meet the high-bandwidth requirements of 800G.
While emphasizing high bandwidth, data center networks also require low latency to meet end-user experience standards. In high-speed networks, ordinary optical fibers undergo more refraction and scattering, resulting in additional time delays during signal transmission.
Limited Spatial Layout
The high bandwidth requirements of 800G networks typically come with more connection ports and optical fibers. However, the limited space in data centers or server rooms poses a challenge. Achieving high-density connections requires accommodating more connection devices in the constrained space, leading to crowded layouts and increased challenges in space management and design.
Complex Network Architecture
The transition to an 800G network necessitates a reassessment of network architecture. Upgrading to higher data rates requires consideration of network design, scalability, and compatibility with existing infrastructure. Therefore, the cabling system must meet both current usage requirements and align with future development trends. Given the long usage lifecycle of cabling systems, addressing how to match the cabling installation with multiple IT equipment update cycles becomes a challenging problem.
High Construction Cost
Implementing 800G data transmission involves investments in infrastructure and equipment. Achieving higher data rates requires upgrading and replacing existing network equipment and cabling management patterns, incurring significant costs. Cabling, in particular, interconnects many network devices, and its required lifecycle is longer than that of the network equipment itself; frequent replacements waste resources.
Effectively addressing these challenges is crucial to unlocking the full potential of a super-fast, efficient data network.
The significance of MTP/MPO cables in high-speed networks, especially in 800G networks, lies in their ability to manage the escalating data traffic efficiently. The following are key advantages of MTP/MPO cables:
High Density, High Bandwidth
MTP/MPO cables adopt a high-density multi-fiber design, enabling the transmission of multiple fibers within a relatively small connector. This design not only provides ample bandwidth support for data centers, meeting the high bandwidth requirements of an 800G network, but also helps save space and supports the high-density connection needs for large-scale data transfers.
Additionally, MTP/MPO cables exhibit excellent optical and mechanical performance, resulting in low insertion loss in high-speed network environments. By utilizing a low-loss cabling solution, they effectively contribute to reducing latency in the network.
Flexibility and Scalability
MTP/MPO connectors come in various configurations, accommodating different fiber counts (8-core, 12-core, 16-core, 24-core, etc.), supporting both multimode and single-mode fibers. With trunk and breakout designs, support for different polarities, and male/female connector options, these features allow seamless integration into various network architectures. The flexibility and scalability of MTP/MPO connectors enable them to adapt to evolving network requirements and facilitate future expansions, particularly in the context of 800G networks.
The high-density and compact design of MTP/MPO cables contribute to saving rack and data room space, enabling data centers to utilize limited space resources more efficiently. This, in turn, facilitates the straightforward deployment and reliable operation of 800G networks, reducing the risks associated with infrastructure changes or additions in terms of cost and performance. Additionally, MTP/MPO cables featuring a Plenum (OFNP) outer sheath exhibit fire resistance and low smoke characteristics, minimizing potential damage and saving on cabling costs.
Scaling the 800G Networks With MTP/MPO Cables
In the implementation of 800G data transmission, the wiring solution is crucial. MTP/MPO cables, as a key component, provide reliable support for high-speed data transmission. FS provides professional solutions for large-scale data center users who require a comprehensive upgrade to 800G speeds, aiming to rapidly increase data center network bandwidth to meet growing business demands.
Newly Built 800G Data Center
Given the rapid expansion of business, many large-scale internet companies choose to build new 800G data centers to enhance their network bandwidth. In these data centers, all network equipment utilizes 800G switches, combined with MTP/MPO cables to achieve a direct-connected 800G network. To ensure high-speed data transmission, advanced 800G 2xFR4/2xLR4 modules are employed between the core switches and backbone switches, and 800G DR8 modules seamlessly interconnect leaf switches with TOR switches.
To simplify connections, a strategic deployment of the 16-core MTP/MPO OS2 trunk cables directly connects to 800G optical modules. This strategic approach maximally conserves fiber resources, optimizes wiring space, and facilitates cable management, providing a more efficient and cost-effective cabling solution for the infrastructure of 800G networks.
Upgrade from 100G to 800G
Certainly, many businesses choose to renovate and upgrade their existing data center networks. In the scenario below, engineers replaced the original 8-core MTP/MPO-LC breakout cable with the 16-core version, connecting it to the existing MTP cassettes. The modules on both ends, previously 100G QSFP28 FR, were upgraded to 800G OSFP XDR8. This seamless deployment migrated the existing structured cabling to an 800G rate, primarily thanks to the 16-core MTP/MPO-LC breakout cable, a proven choice for direct connections from 800G OSFP XDR8 to 100G QSFP28 FR or from 800G QSFP-DD/OSFP DR8 to 100G QSFP28 DR.
In short, this solution increases the density of fiber optic connections in the data center and optimizes cabling space. It not only improves current network performance but also accounts for future network expansion.
Elevating from 400G to the 800G Network
How to upgrade an existing 400G network to 800G in data centers? Let’s explore the best practices through MTP/MPO cables to achieve this goal.
Based on the original 400G network, the core, backbone, and leaf switches have all been upgraded to an 800G rate, while the TOR (Top of Rack) remains at a 400G rate. The core and backbone switches utilize 800G 2xFR4/2xLR4 modules, the leaf switches use 800G DR8 modules, and the TOR adopts 400G DR4 modules. Deploying two 12-core MTP/MPO OS2 trunk cables in a breakout configuration between the 400G and 800G optical modules facilitates interconnection.
This cabling solution enhances scalability, prevents network bottlenecks, reduces latency, and is conducive to expanding bandwidth when transitioning from lower-speed to higher-speed networks in the future. Additionally, this deployment retains the existing network equipment, significantly lowering cost expenditures.
Ultimately, the diverse range of MTP/MPO cable types provides tailored solutions for different connectivity scenarios in 800G networks. As organizations navigate the complexities of high-speed data transmission, MTP/MPO cables stand as indispensable enablers, paving the way for a new era of efficient and robust network infrastructures.
How FS Can Help
The comprehensive networking solutions and product offerings not only save costs but also reduce power consumption, delivering higher value. Considering an upgrade to 800G for your data center network? FS tailors customized solutions for you. Don’t wait any longer—Register as an FS website member now and enjoy free technical support.
Choosing the right MTP/MPO cable ensures efficient and reliable data transmission in today’s fast-paced digital world. With the increasing demand for high-speed connectivity, it is essential to understand the importance of core numbers in MTP/MPO cables. This guide explores the significance of core numbers and provides valuable insights to help you select the right cable for your specific needs. Whether you are setting up a data center or upgrading existing network infrastructure, this article serves as a comprehensive resource.
What is an MTP/MPO cable
An MTP/MPO cable is a high-density fiber optic cable that is commonly used in data centers and telecommunications networks. It is designed to provide a quick and efficient way to connect multiple fibers in a single connector.
MPO and MTP cables have many attributes in common, which is why both are so popular. The key defining characteristic is that these cables have pre-terminated fibers with standardized connectors. While other fiber optic cables have to be painstakingly arrayed and installed at each node in a data center, these cables are practically plug-and-play. To have that convenience while still providing the highest levels of performance makes them a top choice for many data center applications.
MTP/MPO trunk cables, typically used for creating backbone and horizontal interconnections, have an MTP/MPO connector on both ends and are available with 8 to 48 fibers in one cable.
MTP/MPO Harness/Breakout Cables
Harness/Breakout cables are used to break out the MTP/MPO connector into individual connectors, allowing for easy connection to equipment. MTP/MPO conversion cables are used to convert between different connector types, such as MTP to LC or MTP to SC.
The MTP/MPO cables also come in different configurations, such as 8-core, 12-core, 16-core, 32-core, and more, depending on the specific needs of the application. This flexibility in configurations enables users to tailor their choices according to the scale and performance requirements of their networks or data centers. As technology advances, the configurations of MTP/MPO cables continually evolve to meet the increasing demands of data transmission.
How to Choose MTP/MPO cables
Selecting the appropriate core number for MTP/MPO cables directly affects network efficiency and performance. In this section, we delve into the decision-making factors surrounding core numbers; a small decision sketch follows the factors below.
Network Requirements and Data Transmission Goals
Different network applications and data transmission needs may require varying numbers of cores. High-density data centers might necessitate more cores to support large-capacity data transmission, while smaller networks may require fewer cores.
Compatibility with Existing Infrastructure
When choosing the core number for MTP/MPO cables, compatibility with existing infrastructure is crucial. Ensuring that the new cables match existing fiber optic equipment and connectors helps avoid unnecessary compatibility issues.
Consideration for Future Scalability
As businesses grow and technology advances, future network demands may increase. Choosing MTP/MPO cables with a larger number of cores allows for future expansion and upgrades.
Budget and Resource Constraints
Budget and resources also play a role in core number selection. Cables with a larger number of cores tend to be more expensive, while cables with fewer cores may be more cost-effective. Therefore, finding a balance between actual requirements and the available budget is essential.
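Pulling the four factors above together, a simple decision helper might look like this. This is only a sketch: the growth factor and the list of stocked core counts are illustrative assumptions, not an FS sizing rule:

```python
def suggest_core_count(required_fibers: int, growth_factor: float = 1.5,
                       available=(8, 12, 16, 24, 32)) -> int:
    """Pick the smallest stock MTP/MPO core count that covers current needs
    plus headroom for future expansion (growth_factor is illustrative)."""
    target = required_fibers * growth_factor
    for cores in available:
        if cores >= target:
            return cores
    return max(available)   # cap at the largest stocked option

# e.g. a link needing 8 fibers today, with ~50% growth headroom -> 12 cores
print(suggest_core_count(8))  # 12
```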
MTP/MPO Cabling Guide to Core Numbers
40G MTP/MPO Cabling
A 12-fiber MTP/MPO connector interface can accommodate 40G and is usually used in a 40G data center. Typical implementations of MTP/MPO plug-and-play systems split a 12-fiber trunk into six channels that run up to 10 Gigabit Ethernet (depending on the length of the cable). A 40G system uses a 12-fiber trunk to create a Tx/Rx link, dedicating four fibers to upstream transmit at 10G each and four fibers to downstream receive at 10G each, as laid out in the sketch below.
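The fiber allocation in that 12-fiber trunk can be written out explicitly. A sketch of the 4×10G Tx / 4×10G Rx mapping (positions follow the typical scheme in which the four middle fibers go unused):

```python
# 40G over a 12-fiber MTP/MPO trunk: 4 fibers transmit, 4 receive, 4 unused.
fibers = {}
for pos in range(1, 13):
    if pos <= 4:
        fibers[pos] = "Tx 10G"   # upstream transmit lanes (positions 1-4)
    elif pos >= 9:
        fibers[pos] = "Rx 10G"   # downstream receive lanes (positions 9-12)
    else:
        fibers[pos] = "unused"   # middle four fibers idle at 40G

tx = sum(10 for role in fibers.values() if role == "Tx 10G")
print(f"Aggregate per direction: {tx}G over {len(fibers)} fibers")  # 40G over 12
```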
In this scenario, a 40G QSFP+ port on the FS S5850 48S6Q switch is split up into 4 10G channels. An 8-fiber MTP-LC harness cable connects the 40G side with its MTP connector and the four LC connectors link with the 10G side.
As shown below, a 12-fiber MTP trunk cable is used to connect two 40G optical transceivers to realize the 40G to 40G connection between the two switches. The connection method can also be applied to a 100G-100G connection.
40G Trunk Cabling
The 24 Fibers MTP® to MTP® Interconnect Conversion Harness Cable is designed to provide a more flexible multi-fiber cabling system based on MTP® products. Unlike MTP® harness cables, MTP® conversion cables are terminated with MTP® connectors on both ends and open more possibilities for an existing 24-fiber cabling system. The 40/100G MTP® conversion cables eliminate the wasted fibers in current 40G transmission and upcoming 100G transmission. Compared to purchasing and installing separate conversion cassettes, using MTP® conversion cables is a more cost-effective and lower-loss option.
100G MTP/MPO Cabling
QSFP28 100G transceivers using 4 fiber pairs have an MTP/MPO 12f port (with 4 unused fibers). Transmission for short distances (up to 100m) could be done most cost-effectively over multimode fiber using SR4 transmission. Longer distances over single mode use PSM4 transmission over 8 fibers. Transmission over 4 fiber pairs enables both multimode and single-mode transceivers to be connected 1:4 using MPO-LC 8 fiber breakout cables. One QSFP28 100G can connect to four SFP28 25G transceivers.
100G SR4 Parallel BASE-8 over Multimode Fibre
QSFP28 100G SR4 are often connected directly together due to their proximity within switching areas.
Equally QSFP28 SR4 are often connected directly to SFP28 25G ports within the same rack. For example, from a switch 100G port to four different servers with 25G ports.
The 12-core MTP/MPO cables can also be used for 100G parallel to parallel connection. Through the use of MTP patch panels, network reliability is enhanced, ensuring the normal operation of other channels even if a particular channel experiences a failure. Additionally, by increasing the number of parallel channels, it can meet the continuously growing data demands. This flexibility is crucial for adapting to future network expansions.
100G PSM4 Parallel BASE-8 over Singlemode Fibre
QSFP28 100G PSM4 are often connected directly together due to their proximity within switching areas.
Equally QSFP28 ports are often connected directly to SFP28 25G ports within the same rack. For example, from a switch 100G port to four different servers with 25G ports.
200G MTP/MPO Cabling
Although most equipment manufacturers (Cisco, Juniper, Arista, etc.) are bypassing 200G and jumping from 100G to 400G, there are still some 200G transceivers on the market, like FS QSFP56-SR4-200G and QSFP-FR4-200G.
A 12-fiber MTP (MPO) cable enables two QSFP56-SR4-200G modules to be connected to each other.
400G MTP/MPO Cabling
MTP/MPO cables with multi-core connectors are used for optical transceiver connection, and there are four different application scenarios for 400G MTP/MPO cables. Common MTP/MPO patch cables include 8-fiber, 12-fiber, and 16-fiber versions. An 8-fiber or 12-fiber MTP/MPO single-mode patch cable is usually used for the direct connection of two 400G-DR4 optical transceivers. A 16-fiber MTP/MPO patch cable can connect 400G-SR8 optical transceivers to 200G QSFP56 SR4 optical transceivers, and can also connect 400G-8x50G to 400G-4x100G transceivers. The 8-fiber MTP to four LC duplex breakout cable is used to connect a 400G-DR4 optical transceiver with 100G-DR optical transceivers.
In the higher-speed 800G networking landscape, the high density, high bandwidth, and flexibility of MTP/MPO cables have played a crucial role. Leveraging various branching or direct connection schemes, MTP/MPO cables are seamlessly connected to 800G optical modules, 400G optical modules, and 100G optical modules, enhancing the richness and flexibility of network construction.
800G Connectivity with Direct Connect Cabling
The 16-fiber MTP® trunk cable is designed for direct connection between 800G QSFP-DD/OSFP DR8 and 800G OSFP XDR8 optics, supporting 800G transmission for hyperscale data centers.
800G to 8X100G Interconnect
16-fiber MTP®-LC breakout cables are optimized for direct connections from 800G OSFP XDR8 to 100G QSFP28 FR and from 800G QSFP-DD/OSFP DR8 to 100G QSFP28 DR optics, as well as for high-density data center applications.
800G to 2X400G Interconnect
The 16-fiber MTP® conversion cable is designed to provide a more flexible multi-fiber cabling system based on MTP® products. Compared to purchasing and installing separate conversion cassettes, using MTP® conversion cables is a more cost-effective and lower-loss option. In a network upgrade from 400G to 800G, the ability to directly connect one 800G optical module to two 400G optical modules makes more efficient use of cabling space, resulting in cost savings.
In a word, the choice of core number for MTP/MPO cables depends on the specific requirements of the network application. Matching the core number with the requirements of each scenario ensures optimal performance and efficient resource utilization. A well-informed choice ensures that your MTP/MPO cable not only meets but exceeds the demands of your evolving connectivity requirements.
How FS Can Help
As a global leader in enterprise-level ICT solutions, FS not only offers a variety of MTP/MPO cables but also customizes exclusive MTP/MPO cabling solutions based on your requirements, helping your data center network achieve a smooth upgrade. In the era of rapid growth in network data, the time has come to make a choice – FS escorts your data center upgrade. Register as an FS website member and enjoy free technical support.
When designing a new network architecture based on 10Gb Ethernet, we face the challenge of choosing the right equipment to achieve maximum performance and support the future demands of complex network applications.
There are two options for 10Gb Ethernet interconnection: 10GBASE-T and SFP+ solutions (SFP+ transceivers and DAC/AOC). 10GBASE-T copper links can span up to 100 meters using Cat 6a/Cat 7 cables, while SFP+ optical devices support distances of up to 300 meters on multimode fiber and up to 80 kilometers on single-mode fiber.
What are the differences?
SFP+ fiber offers lower latency and cost, and the power consumption of SFP+ solutions is also significantly lower: 10GBASE-T consumes approximately three to four times the power of an SFP+ solution. Moreover, 1Gb SFP transceivers can be inserted into SFP+ ports, running at 1Gb and linking through optical cables to legacy ports; 1000BASE-T-compatible SFP modules can likewise be used to establish lower-speed connections to traditional copper ports.
However, 10GBASE-T copper cabling provides effective backward compatibility with standard copper network equipment, making optimal use of existing copper infrastructure wiring. Additionally, 10GBASE-T is backward compatible with 1G ports, and many low-bandwidth devices still use 1G ports. Compared to SFP+ solutions for small enterprises, 10GBASE-T is generally more cost-effective and easier to deploy.
In comparison, if scalability and flexibility are crucial for small enterprise applications, then 10GBASE-T cabling is the better choice. However, if power efficiency and lower latency are paramount, then 10G SFP+ cabling is clearly the winner.
You will find three common types of 10G SFP+ modules, SFP-10G-SR, SFP-10G-LRM, and SFP-10G-LR, typically used over optical fiber. How should you choose among the three in practice? This article analyzes the question for you.
Exploring the Versatility of SFP-10G-SR, SFP-10G-LR, and SFP-10G-LRM Modules
SFP-10G-SR can be paired with OM3 multimode fiber (MMF), with a transmission distance of up to 300 meters. Built around a VCSEL, it is the lowest-cost, lowest-power-consumption module of the three.
SFP-10G-LR is a module using a distributed feedback laser (DFB). It operates at a wavelength of 1310nm, and its transmission distance through single-mode fiber (SMF) can reach 10 kilometers. It is used for building wiring in large campus areas and even for establishing a Metropolitan Area Network (MAN).
SFP-10G-LRM supports a link length of 220m on standard Fiber Distributed Data Interface (FDDI) grade multimode fiber. To ensure compliance with FDDI grade, OM1, and OM2 fiber specifications, the transmitter should be coupled with a mode conditioning patch cable. Applications on OM3 or OM4 do not require a mode conditioning patch cable.
In general, when the transmission distance is less than 300 meters over OM3, SFP-10G-SR is recommended. If you have other requirements, such as a 200m run over fiber with a modal bandwidth of 500 MHz·km, an SFP-10G-LRM transceiver is needed. For single-mode transmission within 300 meters, SFP-10G-LRM is an economical choice, but for transmissions of 2-10 kilometers, SFP-10G-LR is the only option, as the helper below summarizes.
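The selection logic in this section boils down to distance and fiber type. A compact helper reflecting the guidance above (a sketch that encodes only the three modules discussed here; fiber-grade labels are illustrative):

```python
def pick_10g_module(distance_m: int, fiber: str) -> str:
    """Choose among SFP-10G-SR / LRM / LR per the guidance above.
    fiber: 'om3' (laser-optimized MMF), 'om1/om2' (legacy MMF), or 'smf'."""
    if fiber == "om3" and distance_m <= 300:
        return "SFP-10G-SR"        # lowest cost and power on OM3
    if fiber == "om1/om2" and distance_m <= 220:
        return "SFP-10G-LRM"       # needs a mode-conditioning patch cable
    if fiber == "smf":
        if distance_m <= 300:
            return "SFP-10G-LRM"   # economical for short single-mode runs
        if distance_m <= 10_000:
            return "SFP-10G-LR"    # the only choice of the three at 2-10 km
    raise ValueError("no match among SR/LRM/LR for this distance/fiber")

print(pick_10g_module(250, "om3"))    # SFP-10G-SR
print(pick_10g_module(5_000, "smf"))  # SFP-10G-LR
```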
FS offers a diverse range of 10G SFP+ modules and can tailor solutions to your specific requirements. If you are still contemplating, take action now by registering, and benefit from complimentary technical support.