Network Virtualisation: NVGRE vs. VXLAN Explained

The rise of virtualisation technology has revolutionised data centres, enabling the operation of multiple virtual machines on the same physical infrastructure. However, traditional data centre network designs are not well-suited to these new applications, necessitating a new approach to address these challenges. NVGRE and VXLAN were created to meet this need. This article delves into NVGRE and VXLAN, exploring their differences, similarities, and advantages in various scenarios.

Unleashing the Power of NVGRE Technology

NVGRE (Network Virtualization using Generic Routing Encapsulation) is a network virtualisation method designed to overcome the limitations of traditional VLANs in complex virtual environments.

How It Works

NVGRE encapsulates data packets by adding a Tenant Network Identifier (TNI) to the packet, transmitting it over existing IP networks, and then decapsulating and delivering it on the target host. This enables large-scale virtual networks to be more flexible and scalable on physical infrastructure.

1. Tenant Network Identifier (TNI)

NVGRE introduces a 24-bit TNI to identify different virtual networks or tenants. Each TNI corresponds to a separate virtual network, allowing multiple virtual networks to operate on the same physical infrastructure without interference.

2. Packet Encapsulation

An NVGRE packet is built from the following parts:

  • Outer Ethernet and IP Headers: Address the sending and receiving NVGRE hosts (the tunnel endpoints) across the physical network.
  • GRE Header with TNI: Carries the 24-bit virtual network identifier in the GRE Key field.
  • Original Ethernet Frame: Includes the VM's source MAC address, destination MAC address, Ethernet protocol type (usually IPv4 or IPv6), etc.

Data packets are encapsulated into NVGRE packets for communication between VMs.
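
To make the encapsulation concrete, the following minimal Python sketch packs a 24-bit TNI (called the Virtual Subnet ID in RFC 7637) and an 8-bit FlowID into the GRE Key field. The TNI and FlowID values are illustrative only, not taken from any real deployment.

```python
import struct

def build_nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build the 8-byte GRE header used by NVGRE (RFC 7637 layout).

    The Key Present bit is set, the protocol type is Transparent
    Ethernet Bridging (0x6558), and the 32-bit Key field carries the
    24-bit Virtual Subnet ID (the TNI) plus an 8-bit FlowID.
    """
    if not 0 <= vsid < 2**24:
        raise ValueError("VSID/TNI must fit in 24 bits")
    flags_and_version = 0x2000          # K (Key Present) bit set, version 0
    protocol_type = 0x6558              # Transparent Ethernet Bridging
    key = (vsid << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", flags_and_version, protocol_type, key)

# Illustrative values only: tenant network 5001, flow 7.
header = build_nvgre_header(vsid=5001, flow_id=7)
print(header.hex())                     # 2000655800138907
```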

3. Transport Network

NVGRE packets are transmitted over existing IP networks, including physical or virtual networks. The IP header information is used for routing, while the TNI identifies the target virtual network.

4. Decapsulation

When NVGRE packets reach the host of the target VM, the host decapsulates them, extracting the original Ethernet frame and delivering it to the target VM.

5. MAC Address Table Maintenance

NVGRE hosts maintain a MAC address table to map VM MAC addresses to TNIs. When a host receives an NVGRE packet, it looks up the MAC address table to determine which VM to deliver the packet to.
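
As a rough illustration of the lookup described above (not NVGRE's specified implementation), the table can be thought of as a mapping keyed by TNI and MAC address; the entries below are purely hypothetical.

```python
# Minimal sketch of the per-tenant MAC lookup described above.
# The table maps (TNI, destination MAC) to a local VM; entries are illustrative.
mac_table = {
    (5001, "52:54:00:aa:bb:01"): "vm-web-01",
    (5001, "52:54:00:aa:bb:02"): "vm-web-02",
    (6002, "52:54:00:aa:bb:01"): "vm-db-01",   # same MAC, different tenant
}

def deliver(tni: int, dst_mac: str):
    """Return the local VM that should receive a decapsulated frame, if any."""
    return mac_table.get((tni, dst_mac.lower()))

print(deliver(5001, "52:54:00:AA:BB:01"))   # vm-web-01
print(deliver(6002, "52:54:00:aa:bb:02"))   # None -> unknown, flood or drop
```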

6. Broadcast and Multicast Support

NVGRE uses broadcast and multicast to support communication within virtual networks, allowing VMs to perform broadcast and multicast operations for protocols like ARP and Neighbor Discovery.

Features

  • Network Virtualisation Goals: NVGRE aims to provide a larger number of VLANs for multi-tenancy and load balancing, overcoming the limited VLAN capacity of traditional networks.
  • Encapsulation and Tunneling: Uses encapsulation and tunneling to isolate virtual networks, making VM communication appear direct without considering the underlying physical network.
  • Cross-Data Centre Scalability: Designed to support cross-location virtual networks, ideal for distributed data centre architectures.

A Comprehensive Look at VXLAN Technology

VXLAN (Virtual Extensible LAN) is a network virtualisation technology designed to address the shortage of virtual networks in large cloud data centres.

How It Works

VXLAN encapsulates data packets by adding a Virtual Network Identifier (VNI), transmitting them over existing IP networks, and then decapsulating and delivering them on the target host.

1. Virtual Network Identifier (VNI)

VXLAN introduces a 24-bit VNI to distinguish different virtual networks. Each VNI represents a separate virtual network, allowing multiple virtual networks to operate on the same physical infrastructure without interference.

2. Packet Encapsulation

A VXLAN packet is built from the following parts:

  • Outer IP Header: The source IP address is that of the host (VTEP) where the sending VM resides, and the destination IP address is that of the remote VXLAN tunnel endpoint.
  • UDP Header: Contains source and destination port information (destination port 4789) to identify VXLAN packets.
  • VXLAN Header with VNI: Carries the 24-bit virtual network identifier.
  • Original Ethernet Frame: Includes the VM's source MAC address, destination MAC address, Ethernet protocol type, etc.

Data packets are encapsulated into VXLAN packets for communication between VMs.
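
For comparison with the NVGRE sketch above, here is a minimal Python sketch of the 8-byte VXLAN header defined in RFC 7348, with the 24-bit VNI and the IANA-assigned UDP destination port 4789. The VNI value is illustrative only.

```python
import struct

VXLAN_UDP_PORT = 4789   # IANA-assigned destination port for VXLAN

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    The first byte sets the I flag (VNI present); the 24-bit VNI sits in
    bytes 4-6, and the remaining bits are reserved (zero).
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags_and_reserved = 0x08000000     # I flag set, reserved bits zero
    vni_and_reserved = vni << 8
    return struct.pack("!II", flags_and_reserved, vni_and_reserved)

# Illustrative VNI only.
print(build_vxlan_header(10100).hex())   # 0800000000277400
```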

3. Transport Network

VXLAN packets are transmitted over existing IP networks. The IP header information is used for routing, while the VNI identifies the target virtual network.

4. Decapsulation

When VXLAN packets reach the host of the target VM, the host decapsulates them, extracting the original Ethernet frame and delivering it to the target VM.

5. MAC Address Table Maintenance

VXLAN hosts maintain a MAC address table to map VM MAC addresses to VNIs. When a host receives a VXLAN packet, it looks up the MAC address table to determine which VM to deliver the packet to.

6. Broadcast and Multicast Support

VXLAN uses multicast to simulate broadcast and multicast behaviour within virtual networks, supporting protocols like ARP and Neighbor Discovery.

Features

  • Expanded VLAN Address Space: Extends VLAN identifier capacity from 4096 to 16 million with a 24-bit segment ID.
  • Virtual Network Isolation: Allows multiple virtual networks to coexist on the same infrastructure, each with a unique segment ID.
  • Multi-Tenancy Support: Ideal for environments where different tenants need isolated virtual networks.
  • Layer 2 and 3 Extension: Supports complex network topologies and routing configurations.
  • Industry Support: Widely supported by companies like Cisco, VMware, and Arista Networks.

NVGRE vs VXLAN: Uncovering the Best Virtualisation Tech

NVGRE and VXLAN are both technologies for virtualising data centre networks, aimed at addressing issues in traditional network architectures such as isolation, scalability, and performance. While their goals are similar, they differ in implementation and several key aspects.

Supporters and Transport Protocols

NVGRE is backed mainly by Microsoft and uses GRE as its transport encapsulation, while VXLAN is driven primarily by Cisco and VMware and is carried over UDP.

Packet Format

VXLAN packets carry a 24-bit VNI, supporting 16 million virtual networks. NVGRE carries its 24-bit TNI in the GRE Key field (with the remaining 8 bits used as a FlowID), likewise supporting 16 million virtual networks.

Transmission Method

VXLAN uses multicast to simulate broadcast and multicast for MAC address learning and discovery. NVGRE uses multiple IP addresses for enhanced load balancing without relying on flooding and IP multicast.

Fragmentation

NVGRE supports fragmentation to manage MTU sizes, while VXLAN typically requires the network to support jumbo frames and does not support fragmentation.
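
A quick back-of-the-envelope calculation shows why: assuming an untagged outer Ethernet header and IPv4 without options, the standard header sizes give roughly 50 bytes of VXLAN overhead and 42 bytes of NVGRE overhead on top of a 1500-byte guest MTU.

```python
# Back-of-the-envelope MTU math, assuming IPv4 without options and
# untagged Ethernet; the numbers are the standard header sizes.
INNER_ETHERNET = 14       # the encapsulated VM frame's own header
OUTER_IPV4 = 20
UDP = 8
VXLAN_HDR = 8
GRE_WITH_KEY = 8          # 4-byte GRE base header + 4-byte Key field (NVGRE)

vm_ip_mtu = 1500          # the guest keeps a standard 1500-byte IP MTU

vxlan_underlay_mtu = vm_ip_mtu + INNER_ETHERNET + OUTER_IPV4 + UDP + VXLAN_HDR
nvgre_underlay_mtu = vm_ip_mtu + INNER_ETHERNET + OUTER_IPV4 + GRE_WITH_KEY

print("underlay IP MTU needed for VXLAN:", vxlan_underlay_mtu)   # 1550
print("underlay IP MTU needed for NVGRE:", nvgre_underlay_mtu)   # 1542
```

If the underlay cannot carry frames of this size, the usual options are enabling jumbo frames in the fabric or lowering the VM MTU.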

Conclusion

VXLAN and NVGRE represent significant advancements in network virtualisation, expanding virtual network capacity and enabling flexible, scalable, and high-performance cloud and data centre networks. With support from major industry players, these technologies have become essential for building agile virtualised networking environments.

How FS Can Help

FS offers a wide range of data centre switches, from 1G to 800G, to meet various network requirements and applications. FS switches support VXLAN EVPN architectures and MPLS forwarding, with comprehensive protocol support for L3 unicast and multicast routing, including BGP, OSPF, EIGRP, RIPv2, PIM-SM, SSM, and MSDP. Explore FS high-quality switches and expert solutions tailored to enhance your network at the FS website.

Stacking Technology vs MLAG Technology: What Sets Them Apart?

As businesses grow and networks become more complex, single-device solutions struggle to meet the high availability and performance requirements of modern data centres. To address this, two horizontal virtualisation technologies have emerged: Stacking and Multichassis Link Aggregation Group (MLAG). This article compares Stacking and MLAG, discussing their principles, features, advantages, and disadvantages to help you choose the best option for your network environment.

Understanding Stacking Technology

Stacking technology involves combining multiple stackable devices into a single logical unit. Users can manage and operate multiple devices as one, increasing port count and switching capacity while improving reliability through mutual backup between devices.

Advantages of Stacking:

  • Simplified Management: Managed via a single IP address, reducing management complexity. Administrators can configure and monitor the entire stack from one interface.
  • Increased Port Density: Combining multiple switches offers more ports, meeting the demands of large-scale networks.
  • Seamless Redundancy: If one stack member fails, others seamlessly take over, ensuring high network availability.
  • Enhanced Performance: Increased interconnect bandwidth among switches improves data exchange efficiency and performance.

Unlocking the Power of MLAG Technology

Multichassis Link Aggregation Group (MLAG) is a newer cross-device link aggregation technology. It allows two access switches to negotiate link aggregation as if they were one device. This cross-device link aggregation enhances reliability from the single-board level to the device level, making MLAG suitable for modern network topologies requiring redundancy and high availability.

Advantages of MLAG:

  • High Availability: Increases network availability by allowing smooth traffic transition between switches in case of failure. There are no single points of failure at the switch level.
  • Improved Bandwidth: Aggregating links across multiple switches significantly increases accessible bandwidth, beneficial for high-demand environments.
  • Load Balancing: Evenly distributes traffic across member links, preventing overloads and maximising network utilisation.
  • Compatibility and Scalability: Better compatibility and scalability, able to negotiate link aggregation with devices from different vendors.

Stacking vs. MLAG: Which Network Virtualisation Tech Reigns Supreme?

Both Stacking and MLAG are crucial for achieving redundant access and link redundancy, significantly enhancing the reliability and scalability of data centre networks. Despite their similarities, each has distinct advantages, disadvantages, and suitable application scenarios. Understanding the concepts and advantages of Stacking and MLAG is crucial. Here’s a detailed comparison to help you distinguish between the two:

Reliability

Stacking: Centralised control plane shared by all switches, with the master switch managing the stack. Failure of the master switch can affect the entire system despite backup switches.

MLAG: Each switch operates with an independent control plane. Consequently, the failure of one switch does not impact the functionality of the other, effectively isolating fault domains and enhancing overall network reliability.

Configuration Complexity

Stacking: Appears as a single device logically, simplifying configuration and management.

MLAG: Requires individual configuration of each switch but can be simplified with modern management tools and automation scripts.

Cost

Stacking: Requires specialised stacking cables, adding hardware costs.

MLAG: Requires peer-link cables, which incur costs comparable to stacking cables.

Performance

Stacking: Performance may be limited by the master switch’s CPU load, affecting overall system performance.

MLAG: Each switch independently handles data forwarding, distributing CPU load and enhancing performance.

Upgrade Complexity

Stacking: Higher upgrade complexity, needing synchronised upgrades of all member devices, with longer operation times and higher risks.

MLAG: Lower upgrade complexity, allowing independent upgrades of each device, reducing complexity and risk.

Upgrade Downtime

Stacking: The duration of downtime varies between 20 seconds and 1 minute, contingent upon the traffic load.

MLAG: Minimal downtime, usually within seconds, with negligible impact.

Network Design

Stacking: Simpler design, appearing as a single device, easier to manage and design.

MLAG: More complex design, logically still two separate devices, requiring more planning and management.

Enhancing Real-World Networks: Stacking vs. MLAG Applications

Having covered the differences between Stacking and MLAG, this section explains how these technologies are used in real-world situations, helping you make informed decisions when setting up a network.

Stacking is suitable for small to medium-sized network environments that require simplified management and configuration and enhanced redundancy. It is widely used in enterprise campus networks and small to medium-sized data centres.

MLAG, on the other hand, is ideal for large data centres and high-density server access environments that require high availability and high performance. It offers redundancy and load balancing across devices. The choice between these technologies depends on the specific needs, scale, and complexity of your network.

In practical situations, Stacking and MLAG technologies can be combined to take advantage of their strengths. This creates a synergistic effect that is stronger than each technology individually. Stacking technology simplifies the network topology, increasing bandwidth and fault tolerance. MLAG technology provides redundancy and load balancing, enhancing network availability.

Therefore, consider integrating Stacking and MLAG technologies to achieve better network performance and reliability when designing and deploying enterprise networks.

Conclusion

Both Multichassis Link Aggregation (MLAG) and stackable switches offer unique advantages in modern network architectures. MLAG ensures backup and reliability with cross-switch link aggregation. Stackable switches allow for easy management and scalability by acting as one unit. Understanding the specific requirements and use cases of each technology is essential for designing resilient and efficient network infrastructures.

How FS Can Help

FS, a trusted global ICT products and solutions provider, offers a range of data centre switches to meet diverse enterprise needs. FS data centre switches support a variety of features and protocols, including stacking, MLAG, and VXLAN, making them suitable for diverse network construction. Customised solutions tailored to your requirements can assist with network upgrades. Visit the FS website to explore products and solutions that can help you build a high-performance network today.

VXLAN VS. MPLS: From Data Centre to Metropolitan Area Network

In recent years, the advancement of cloud computing, virtualisation, and containerisation technologies has driven the adoption of network virtualisation. Both MPLS and VXLAN leverage virtualisation concepts to create logical network architectures, enabling more complex and flexible domain management. However, they serve different purposes. This article will compare VXLAN and MPLS, explaining why VXLAN is more popular than MPLS in metropolitan and wide area networks.

Understanding VXLAN and MPLS: Key Concepts Unveiled

VXLAN

Virtual Extensible LAN (VXLAN) encapsulates Layer 2 Ethernet frames within Layer 3 UDP packets, enabling devices and applications to communicate over a large physical network as if they were on the same Layer 2 Ethernet network. VXLAN technology uses the existing Layer 3 network as an underlay to create a virtual Layer 2 network, known as an overlay. As a network virtualisation technology, VXLAN addresses the scalability challenges associated with large-scale cloud computing setups and deployments.

MPLS

Multi-Protocol Label Switching (MPLS) is a technology that uses labels to direct data transmission quickly and efficiently across open communication networks. The term “multi-protocol” indicates that MPLS can support various network layer protocols and is compatible with multiple Layer 2 data link layer technologies. This technology simplifies data transmission between two nodes by using short path labels instead of long network addresses, and it allows additional sites to be added with minimal configuration. MPLS also operates independently of IP itself, simply streamlining how traffic between IP addresses is forwarded. Running a VPN over MPLS adds an extra layer of security, since MPLS itself lacks built-in security features.

Data Centre Network Architecture Based on MPLS

MPLS Layer 2 VPN (L2VPN) provides Layer 2 connectivity across a Layer 3 network, but it requires all routers in the network to be IP/MPLS routers. Virtual networks are isolated using MPLS pseudowire encapsulation and can stack MPLS labels, similar to VLAN tag stacking, to support a large number of virtual networks.

IP/MPLS is commonly used in telecom service provider networks, so many service providers’ L2VPN services are implemented using MPLS. These include point-to-point L2VPN and multipoint L2VPN implemented according to the Virtual Private LAN Service (VPLS) standard. These services typically conform to the MEF Carrier Ethernet service definitions of E-Line (point-to-point) and E-LAN (multipoint).

Because MPLS and its associated control plane protocols are designed for highly scalable Layer 3 service provider networks, some data centre operators have adopted MPLS L2VPN in their data centre networks to overcome the scalability and resilience limitations of Layer 2 switched networks, as shown in the diagram.

Why is VXLAN Preferred Over MPLS in Data Centre Networks?

Considering the features and applications of both technologies, the following points summarise why VXLAN is more favoured:

Cost of MPLS Routers

For a long time, some service providers have been interested in building cost-effective metropolitan networks using data centre-grade switches. Over 20 years ago, the first generation of competitive metro Ethernet service providers, like Yipes and Telseon, built their networks using the most advanced gigabit Ethernet switches available in enterprise networks at the time. However, such networks struggled to provide the scalability and resilience required by large service providers (SPs). Consequently, most large SPs shifted to MPLS (as shown in the diagram below). However, MPLS routers are more expensive than ordinary Ethernet switches, and this cost disparity has persisted over the decades. Today, data centre-grade switches combined with VXLAN overlay architecture can largely eliminate the shortcomings of pure Layer 2 networks without the high costs of MPLS routing, attracting a new wave of SPs.

Tight Coupling Between Core and Edge

MPLS-based VPN solutions require tight coupling between edge and core devices, meaning every node in the data centre network must support MPLS. In contrast, VXLAN only requires a VTEP (VXLAN Tunnel Endpoint) in edge nodes (e.g., leaf switches) and can use any IP-capable device or IP transport network to implement data centre spine and data centre interconnect (DCI).

MPLS Expertise

Outside of large service providers, MPLS technology is challenging to learn, and relatively few network engineers can easily build and operate MPLS-based networks. VXLAN, being simpler, is becoming a fundamental technology widely mastered by data centre network engineers.

Advancements in Data Centre Switching Technology

Modern data centre switching chips have integrated numerous functions that make metro networks based on VXLAN possible. Here are two key examples:

  • Hardware-based VTEP supporting line-rate VXLAN encapsulation.
  • Expanded tables providing the routing and forwarding scale required to create resilient, scalable Layer 3 underlay networks and multi-tenant overlay services.

Additionally, newer data centre-grade switches have powerful CPUs capable of supporting advanced control planes crucial for extended Ethernet services, whether it’s BGP EVPN (a protocol-based approach) or an SDN-based protocol-less control plane. Therefore, in many metro network applications, specialised (and thus high-cost) routing hardware is no longer necessary.

VXLAN Overlay Architecture for Metropolitan and Wide Area Networks

Overlay networks have been widely adopted in various applications such as data centre networks and enterprise SD-WAN. A key commonality among these overlay networks is their loose coupling with the underlay network. Essentially, as long as the network provides sufficient capacity and resilience, the underlay network can be constructed using any network technology and utilise any control plane. The overlay is only defined at the service endpoints, with no service provisioning within the underlay network nodes.

One of the primary advantages of SD-WAN is its ability to utilise various networks, including broadband or wireless internet services, which are widely available and cost-effective, providing sufficient performance for many users and applications. When VXLAN overlay is applied to metropolitan and wide area networks, similar benefits are also realised, as depicted in the diagram.

When building a metropolitan network to provide services like Ethernet Line (E-Line), Multipoint Ethernet Local Area Network (E-LAN), or Layer 3 VPN (L3VPN), it is crucial to ensure that the Underlay can meet the SLA (Service Level Agreement) requirements for such services.

VXLAN-Based Metropolitan Network Overlay Control Plane Options

So far, our focus has mainly been on the advantages of VXLAN over MPLS in terms of network architecture and capital costs, i.e., the advantages of the data plane. However, VXLAN does not specify a control plane, so let’s take a look at the Overlay control plane options.

The most prominent control plane option for creating a VXLAN overlay and providing overlay services is BGP EVPN, a protocol-based approach that requires service configuration on each edge node. The main drawback of BGP EVPN is its operational complexity.

Another protocol-less approach is using SDN and services defined in an SDN controller to programme the data plane of each edge node. This approach eliminates much of the operational complexity of protocol-based BGP EVPN. Nonetheless, the centralised SDN controller architecture, suitable for single-site data centre architectures, presents significant scalability and resilience issues when implemented in metropolitan and wide area networks. As a result, it’s unclear whether it’s a superior alternative to MPLS for metropolitan networks.

There’s also a third possibility—decentralised or distributed SDN, in which the SDN controller’s functionality is duplicated and spread across the network. This can also be referred to as a “controller-less” SDN because it doesn’t necessitate a separate controller server/device, thereby completely resolving the scalability and resilience problems associated with centralised SDN control while maintaining the advantages of simplified and expedited service configuration.

Deployment Options

Due to VXLAN’s ability to decouple Overlay services delivery from the Underlay network, it creates deployment options that MPLS cannot match, such as virtual service Overlays on existing IP infrastructure, as shown in the diagram. VXLAN-based switch deployments at the edge of existing networks, scalable according to business requirements, allow for the addition of new Ethernet and VPN services and thus generate new revenue without altering the existing network.

VXLAN Overlay Deployment on Existing Metropolitan Networks

The metropolitan network infrastructure shown in Figure 2 can support all services offered by an MPLS-based network, including commercial internet, Ethernet and VPN services, as well as consumer triple-play services. Moreover, it completely eliminates the costs and complexities associated with MPLS.

(Figure: Converged Metropolitan Core with VXLAN Service Overlay)

Conclusion

VXLAN has become the most popular overlay network virtualization protocol in data centre network architecture, surpassing many alternative solutions. When implemented with hardware-based VTEPs in switches and DPUs, and combined with BGP EVPN or SDN control planes and network automation, VXLAN-based overlay networks can provide the scalability, agility, high performance, and resilience required for distributed cloud networks in the foreseeable future.

How FS Can Help

FS is a trusted provider of ICT products and solutions to enterprise customers worldwide. Our range of data centre switches covers multiple speeds, catering to diverse business needs. We offer personalised customisation services to tailor exclusive solutions for you and assist with network upgrades.

Explore the FS website today, choose the products and solutions that best suit your requirements, and build a high-performance network.

Network Virtualisation: VXLAN Benefits & Differences

With the rapid development of cloud computing and virtualisation technologies, data centre networks are facing increasing challenges. Traditional network architectures have limitations in meeting the demands of large-scale data centres, particularly in terms of scalability, isolation, and flexibility. To overcome these limitations and provide better performance and scalability for data centre networks, VXLAN (Virtual Extensible LAN) has emerged as an innovative network virtualisation technology. This article will detail the principles and advantages of VXLAN, its applications in data centre networks, and help you understand the differences between VXLAN and VLAN.

The Power of VXLAN: Transforming Data Centre Networks

VXLAN is a network virtualisation technology designed to overcome the limitations of traditional Ethernet, offering enhanced scalability and isolation. It enables the creation of a scalable virtual network on existing infrastructure, allowing virtual machines (VMs) to move freely within a logical network, regardless of the underlying physical network topology. VXLAN achieves this by creating a virtual Layer 2 network over an existing IP network, encapsulating traditional Ethernet frames within UDP packets for transmission. This encapsulation allows VXLAN to operate on current network infrastructure without requiring extensive modifications.

VXLAN uses a 24-bit VXLAN Network Identifier (VNI) to identify virtual networks, allowing multiple independent virtual networks to coexist simultaneously. The inner Ethernet frame of a VXLAN packet is addressed to the MAC of the destination virtual machine or physical host within the VXLAN segment, while the outer headers are addressed to the remote tunnel endpoint, enabling communication between virtual machines across the underlay. VXLAN also supports multipath transmission (in combination with MP-BGP EVPN) and provides multi-tenant isolation within the network.

How it works

  • Encapsulation: When a virtual machine (VM) sends an Ethernet frame, the VXLAN module encapsulates it in a UDP packet. The source IP address of the packet is the IP address of the host where the VM resides, and the destination IP address is that of the remote endpoint of the VXLAN tunnel. The VNI field in the VXLAN header identifies the target virtual network. The UDP packet is then transmitted through the underlying network to reach the destination host.
  • Decapsulation: Upon receiving a VXLAN packet, the VXLAN module parses the UDP packet header to extract the encapsulated Ethernet frame. By examining the VNI field, the VXLAN module identifies the target virtual network and forwards the Ethernet frame to the corresponding virtual machine or physical host.

This process of encapsulation and decapsulation allows VXLAN to transparently transport Ethernet frames over the underlying network, while simultaneously providing logically isolated virtual networks.
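
The decapsulation step can be sketched in a few lines of Python. This is only an illustration of the header parsing described above, not the datapath of any particular VTEP implementation, and the input bytes are made up for the example.

```python
import struct

def decapsulate_vxlan(payload: bytes) -> tuple[int, bytes]:
    """Parse the 8-byte VXLAN header from a received UDP payload and
    return (VNI, inner Ethernet frame)."""
    if len(payload) < 8:
        raise ValueError("payload too short for a VXLAN header")
    flags, vni_field = struct.unpack("!II", payload[:8])
    if not flags & 0x08000000:          # the I flag must be set
        raise ValueError("no valid VNI present")
    return vni_field >> 8, payload[8:]

# Illustrative input: a header for VNI 10100 followed by a stand-in frame.
inner = b"\x00" * 64                    # placeholder for the inner Ethernet frame
vni, frame = decapsulate_vxlan(bytes.fromhex("0800000000277400") + inner)
print(vni)                              # 10100
```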

Key Components

  • VXLAN Identifier (VNI): Used to distinguish different virtual networks, similar to a VLAN identifier.
  • VTEP (VXLAN Tunnel Endpoint): A network device responsible for encapsulating and decapsulating VXLAN packets, typically a switch or router.
  • Control Plane and Data Plane: The control plane is responsible for establishing and maintaining VXLAN tunnels, while the data plane handles the actual data transmission.

The Benefits of VXLAN: A Game Changer for Virtual Networks

VXLAN, as an emerging network virtualisation technology, offers several advantages in data centre networks:

Scalability

VXLAN uses a 24-bit VNI identifier, supporting up to 16,777,216 virtual networks, each with its own independent Layer 2 namespace. This scalability meets the demands of large-scale data centres and supports multi-tenant isolation.

Cross-Subnet Communication

Traditional Ethernet relies on Layer 3 routers for forwarding across different subnets. VXLAN, by using the underlying IP network as the transport medium, enables cross-subnet communication within virtual networks, allowing virtual machines to migrate freely without changing their IP addresses.

Flexibility

VXLAN can operate over existing network infrastructure without requiring significant modifications. It is compatible with current network devices and protocols, such as switches, routers, and BGP. This flexibility simplifies the creation and management of virtual networks.

Multipath Transmission

VXLAN works with equal-cost multipath (ECMP) routing in the underlay and an MP-BGP EVPN control plane to achieve load balancing and redundancy in data centre networks. Traffic can take the optimal path based on network load and path availability, providing better performance and reliability.
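
One detail worth noting is how VXLAN enables that underlay load balancing: RFC 7348 recommends deriving the outer UDP source port from a hash of the inner frame's headers, so that ECMP spreads different flows across paths. The sketch below is a simplified illustration of that idea, not any switch's actual hashing algorithm.

```python
import zlib

def vxlan_source_port(inner_src_mac: str, inner_dst_mac: str,
                      ethertype: int) -> int:
    """Derive an outer UDP source port from a hash of the inner frame's
    headers so that underlay ECMP spreads different flows across paths
    (the approach suggested in RFC 7348). Simplified illustration only."""
    flow_key = f"{inner_src_mac}-{inner_dst_mac}-{ethertype:04x}".encode()
    digest = zlib.crc32(flow_key)
    # Keep the port in the ephemeral range 49152-65535.
    return 49152 + (digest % 16384)

print(vxlan_source_port("52:54:00:aa:bb:01", "52:54:00:aa:bb:02", 0x0800))
```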

Security

VXLAN itself does not encrypt traffic, but its tunnels can be protected to ensure data confidentiality and integrity over the underlying IP network. Combined with secure protocols (such as IPsec) or virtual private networks (VPNs), VXLAN can offer a higher level of data transmission security.

VXLAN vs. VLAN: Unveiling the Key Differences

VXLAN (Virtual Extensible LAN) and VLAN (Virtual Local Area Network) are two distinct network isolation technologies that differ significantly in their implementation, functionality, and application scenarios.

Implementation

VLAN: VLAN is a Layer 2 (data link layer) network isolation technology that segments a physical network into different virtual networks using VLAN identifiers (VLAN IDs) configured on switches. VLANs use VLAN tags within a single physical network to identify and isolate different virtual networks, achieving isolation between different users or devices.

VXLAN: VXLAN is a Layer 3 (network layer) network virtualisation technology that extends Layer 2 networks by creating virtual tunnels over an underlying IP network. VXLAN uses VXLAN Network Identifiers (VNIs) to identify different virtual networks and encapsulates original Ethernet frames within UDP packets to enable communication between virtual machines, overcoming physical network limitations.

Functionality

VLAN: VLANs primarily provide Layer 2 network segmentation and isolation, allowing a single physical network to be divided into multiple virtual networks. Different VLANs are isolated from each other, enhancing network security and manageability.

VXLAN: VXLAN not only provides Layer 2 network segmentation but also creates virtual networks over an underlying IP network, enabling extensive dynamic VM migration and inter-data centre communication. VXLAN offers greater network scalability and flexibility, making it suitable for large-scale cloud computing environments and virtualised data centres.

Application Scenarios

VLAN: VLANs are suitable for small to medium-sized network environments, commonly found in enterprise LANs. They are mainly used for organisational user segmentation, security isolation, and traffic management.

VXLAN: VXLAN is ideal for large data centre networks, especially in cloud computing environments and virtualised data centres. It supports large-scale dynamic VM migration, multi-tenant isolation, and network scalability, providing a more flexible and scalable network architecture.

These distinctions highlight how VXLAN and VLAN cater to different networking needs and environments, offering tailored solutions for varying levels of network complexity and scalability.

Enhancing Data Centres with VXLAN Technology

The application of VXLAN enhances the flexibility, efficiency, and security of data centre networks, forming a crucial part of modern data centre virtualisation. Here are some typical applications of VXLAN in data centres:

Virtual Machine Migration

VXLAN allows virtual machines to migrate freely between different physical hosts without changing IP addresses. This flexibility and scalability are vital for achieving load balancing, resource scheduling, and fault tolerance in data centres.

Multi-Tenant Isolation

By using different VNIs, VXLAN can divide a data centre into multiple independent virtual networks, ensuring isolation between different tenants. This isolation guarantees data security and privacy for tenants and allows each tenant to have independent network policies and quality of service guarantees.

Inter-Data Centre Connectivity

VXLAN can extend across multiple data centres, enabling the establishment of virtual network connections between them. This capability supports resource sharing, business expansion, and disaster recovery across data centres.

Cloud Service Providers

VXLAN helps cloud service providers build highly scalable virtualised network infrastructures. By using VXLAN, cloud service providers can offer flexible virtual network services and support resource isolation and security in multi-tenant environments.

Virtual Network Functions (VNF)

Combining VXLAN with Network Functions Virtualisation (NFV) enables the deployment and management of virtual network functions. VXLAN serves as the underlying network virtualisation technology, providing flexible network connectivity and isolation for VNFs, thus facilitating rapid deployment and elastic scaling of network functions.

Conclusion

In summary, VXLAN offers powerful scalability, flexibility, and isolation, providing new directions and solutions for the future development of data centre networks. By utilising VXLAN, data centres can achieve virtual machine migration, multi-tenant isolation, inter-data centre connectivity, and enhanced support for cloud service providers.

How FS Can Help

As an industry-leading provider of network solutions, FS offers a variety of high-performance data centre switches supporting multiple protocols, such as MLAG, EVPN-VXLAN, link aggregation, and LACP. FS switches come pre-installed with PicOS®, equipped with comprehensive SDN capabilities and the compatible AmpCon™ management software. This combination delivers a more resilient, programmable, and scalable network operating system (NOS) with lower TCO. The advanced PicOS® and AmpCon™ management platform enables data centre operators to efficiently configure, monitor, manage, and maintain modern data centre fabrics, achieving higher utilisation and reducing overall operational costs.

Register on the FS website now to enjoy customised solutions tailored to your needs, optimising your data centre for greater efficiency and benefits.

Accelerating Data Centers: FS Unveils Next-Gen 400G Solutions

As large-scale data centers transition to faster and more scalable infrastructures and with the rapid adoption of hyperscale cloud infrastructures and services, existing 100G networks fall short in meeting current demands. As the next-generation mainstream port technology, 400G significantly increases network bandwidth, enhances link utilization, and assists operators, OTT providers, and other clients in effectively managing unprecedented data traffic growth.

To meet the demand for higher data rates, FS has been actively developing a series of 400G products, including 400G switches, optical modules, cables, and network adapters.

FS 400G Switches

The emergence of 400G data center switches has facilitated the transition from 100G to 400G in data centers, providing flexibility for building large-scale leaf and spine designs while reducing the total number of network devices. This reduction can save costs and decrease power consumption. Whether it’s the powerful N9510-64D or the versatile N9550 series, FS 400G data center switches can deliver the performance and flexibility required for today’s data-intensive applications.

Of particular note is that, as open network switches, the N8550 and N9550 series switches can enhance flexibility by freely choosing preferred operating systems. They are designed to meet customer requirements by providing comprehensive support for L3 features, SONiC and Broadcom chips, and data center functionalities. Additionally, FS offers PicOS-based open network switch operating system solutions, which provide a more flexible, programmable, and scalable network operating system (NOS) at a lower total cost of ownership (TCO).

FS 400G Transceivers

FS offers two different types of packaging for its 400G transceivers: QSFP-DD and OSFP, developed to support 400G with performance as their hallmark. Additionally, FS provides CFP2 DCO transceivers for coherent transmission at various rates (100G/200G/400G) in DWDM applications. Moreover, FS has developed InfiniBand cables and transceivers to enhance the performance of HPC networks, meeting the requirements for high bandwidth, low latency, and highly reliable connections.

FS conducts rigorous testing on its 400G optical modules using advanced analytical equipment, including TX/RX testing, temperature measurement, rate testing, and spectrometer evaluation tests, to ensure the performance and compatibility of the optical modules.

FS 400G Cables

When planning 400G Ethernet cabling or connection schemes, it’s essential to choose devices with low insertion loss and good return loss to meet the performance requirements of high-density data center links. FS offers various wiring options, including DAC/AOC cables and breakout cables. FS DAC/AOC breakout cables provide three connection types to meet high-density requirements for standard and combination connector configurations: 4x100G, 2x200G, and 8x50G. Their low insertion loss and ultra-low crosstalk effectively enhance transmission performance, while their high bend flexibility offers cost-effective solutions for short links.

FS 400G Network Adapters

FS 400G network adapters utilize the industry-leading ConnectX-7 series cards. The ConnectX-7 VPI card offers a 400Gb/s port for InfiniBand, ultra-low latency, and delivers 330 to 370 million messages per second, enabling top performance and flexibility to meet the growing demands of data center applications. In addition to all existing innovative features from previous versions, the ConnectX-7 card also provides numerous enhanced functionalities to further boost performance and scalability.

FS 400G Networking Solutions

To maximize the utilization of the 400G product series, FS offers comprehensive 400G network solutions, such as solutions tailored for upgrading from 100G to high-density 400G data centers. These solutions provide diverse and adaptable networking options customized for cloud data centers. They are designed to tackle the continuous increase in data center traffic and the growing need for high-bandwidth solutions in extensive 400G data center networks.

For more information about FS 400G products, please read FS 400G Product Family Introduction.

How FS Can Help

Register for an FS account now, choose from our range of 400G products and solutions tailored to your needs, and effortlessly upgrade your network.

Exploring FS 100G EDR InfiniBand Solutions: Powering HPC

In the realm of high-speed processing and complex workloads, InfiniBand is pivotal for HPC and hyperscale clouds. This article explores FS’s 100G EDR InfiniBand solution, emphasizing the deployment of QSFP28 EDR transceivers and cables to boost network performance.

What Are the InfiniBand EDR 100G Cables and Transceivers?

InfiniBand EDR 100G Active AOC Cables

The NVIDIA InfiniBand MFA1A00-E001, an active optical cable based on a Class 1 FDA laser, is designed for InfiniBand 100Gb/s EDR systems. With lengths ranging from 1m to 100m, these cables offer predictable latency, consume a maximum of 3.5W, and enhance airflow in high-speed HPC environments.

InfiniBand EDR 100G Passive Copper Cables

The NVIDIA InfiniBand MCP1600-E001E30 is available in lengths of 0.5m to 3m. With four high-speed copper pairs supporting up to 25Gb/s, it offers efficient short-haul connectivity. Featuring EEPROM on each QSFP28 port, it enhances host system communication, enabling higher port bandwidth, density, and configurability while reducing power demand in data centers.

InfiniBand EDR 100G Optical Modules

The 100Gb EDR optical modules, packaged in QSFP28 form factor with LC duplex or MTP/MPO-12 connectors, are suitable for both EDR InfiniBand and 100G Ethernet. They can be categorized into QSFP28 SR4, QSFP28 PSM4, QSFP28 CWDM4, and QSFP28 LR4 based on transmission distance requirements.

100Gb InfiniBand EDR System Scenario Applications

InfiniBand has gained widespread adoption in data centers and other domains, primarily employing the spine-leaf architecture. In data centers, transceivers and cables play a pivotal role in two key scenarios: Data Center to User and Data Center Interconnects.

For more on application scenarios, please read 100G InfiniBand EDR Solution.

Conclusion

Amidst the evolving landscape of 100G InfiniBand EDR, FS’s solution emerges as mature and robust. Offering high bandwidth, low latency, and reduced power consumption, it enables higher port density and configurability at a lower cost. Tailored for large-scale data centers, HPC, and future network expansion, customers can choose products based on application needs, transmission distance, and deployment. FS 100G EDR InfiniBand solution meets the escalating demands of modern computational workloads.

Navigating Optimal GPU-Module Ratios: Decoding the Future of Network Architecture

The market’s diverse methods for calculating the optical module-to-GPU ratio lead to discrepancies due to varying network structures. The precise number of optical modules required hinges on critical factors such as network card models, switch models, and the scalable unit count.

Network Card Model

The primary models are ConnectX-6 (200Gb/s, for A100) and ConnectX-7 (400Gb/s, for H100), with the upcoming ConnectX-8 800Gb/s slated for release in 2024.

Switch Model

MQM9700 switches (64 channels of 400Gb/s) and MQM8700 switches (40 channels of 200Gb/s) are the main types, affecting optical module needs based on transmission rates.

Number of Units (Scalable Unit)

Smaller quantities use a two-tier structure, while larger quantities employ a three-tier structure, as seen in H100 and A100 SuperPODs.

  • H100 SuperPOD: Each unit consists of 32 nodes (DGX H100 servers) and supports a maximum of 4 units to form a cluster, using a two-layer switching architecture.
  • A100 SuperPOD: Each unit consists of 20 nodes (DGX A100 servers) and supports a maximum of 7 units to form a cluster. If the number of units exceeds 5, a three-layer switching architecture is required.

Optical Module Demand Under Four Network Configurations

Projected shipments of H100 and A100 GPUs in 2023 and 2024 indicate substantial optical module demands, with a significant market expansion forecasted. The following are four application scenarios:

  • A100+ConnectX6+MQM8700 Three-layer Network: Ratio 1:6, all using 200G optical modules.
  • A100+ConnectX6+MQM9700 Two-layer Network: 1:0.75 of 800G optical modules + 1:1 of 200G optical modules.
  • H100+ConnectX7+MQM9700 Two-layer Network: 1:1.5 of 800G optical modules + 1:1 of 400G optical modules.
  • H100+ConnectX8 (yet to be released)+MQM9700 Three-layer Network: Ratio 1:6, all using 800G optical modules.

For detailed calculations regarding each scenario, you can click on this article to learn more.
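
As a rough illustration of how these ratios translate into module counts, the sketch below applies them to a hypothetical 8,192-GPU cluster. Only the ratios come from the scenarios above; the cluster size is purely illustrative.

```python
# Applying the per-scenario optical-module-to-GPU ratios listed above
# to a hypothetical GPU count (8,192 is illustrative only).
scenarios = {
    "A100 + CX6 + MQM8700, 3-layer": {"200G": 6.0},
    "A100 + CX6 + MQM9700, 2-layer": {"800G": 0.75, "200G": 1.0},
    "H100 + CX7 + MQM9700, 2-layer": {"800G": 1.5, "400G": 1.0},
    "H100 + CX8 + MQM9700, 3-layer": {"800G": 6.0},
}

gpus = 8192
for name, ratios in scenarios.items():
    modules = {speed: int(gpus * ratio) for speed, ratio in ratios.items()}
    print(f"{name}: {modules}")
```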

Conclusion

As technology progresses, the networking industry anticipates the rise of high-speed solutions like 400G multimode optical modules. FS offers optical modules from 1G to 800G, catering to evolving network demands.

Register for an FS account, select products that suit your needs, and FS will tailor an exclusive solution for you to achieve network upgrades.

Revolutionizing Data Center Networking: From Traditional to Advanced Architectures

As businesses upgrade their data centers, they’re transitioning from traditional 2-layer network architectures to more advanced 3-layer routing frameworks. Protocols like OSPF and BGP are increasingly used to manage connectivity and maintain network reliability. However, certain applications, especially those related to virtualization, HPC, and storage, still rely on 2-layer network connectivity due to their specific requirements.

VXLAN Overlay Network Virtualization

In today’s fast-paced digital environment, applications are evolving to transcend physical hardware and networking constraints. An ideal networking solution offers scalability, seamless migration, and robust reliability within a 2-layer framework. VXLAN tunneling technology has emerged as a key enabler, constructing a virtual 2-layer network on top of the existing 3-layer infrastructure. Control plane protocols like EVPN synchronize network states and tables, fulfilling contemporary business networking requirements.

Network virtualization divides a single physical network into distinct virtual networks, optimizing resource use across data center infrastructure. VXLAN, utilizing standard overlay tunneling encapsulation, extends the control plane using the BGP protocol for better compatibility and flexibility. VXLAN provides a larger namespace for network isolation across the 3-layer network, supporting up to 16 million networks. EVPN disseminates layer 2 MAC and layer 3 IP information, enabling communication between VNIs and supporting both centralized and distributed deployment models.

For enhanced flexibility, this project utilizes a distributed gateway setup, supporting agile execution and deployment processes. Equal-Cost Multipath (ECMP) routing and other methodologies optimize resource utilization and offer protection from single node failures.

RoCE over EVPN-VXLAN

RoCE technology facilitates efficient data transfer between servers, reducing CPU overhead and network latency. Integrating RoCE with EVPN-VXLAN enables high-throughput, low-latency network transmission in high-performance data center environments, enhancing scalability. Network virtualization divides physical resources into virtual networks tailored to distinct business needs, allowing for agile resource management and rapid service deployment.

Simplified network planning, deployment, and operations are essential for managing large-scale networks efficiently. Unnumbered BGP eliminates the need for complex IP address schemes, improving efficiency and reducing operational risks. Real-time fault detection tools like WJH provide deep network insights, enabling quick resolution of network challenges.

Conclusion

Essentially, recent advancements in data center networking focus on simplifying network design, deployment, and management. Deploying technological solutions such as Unnumbered BGP eliminates the need for complex IP address schemes, reducing setup errors and boosting productivity. Tools like WJH enable immediate fault detection, providing valuable network insights and enabling quick resolution of network issues. The evolution of data center infrastructures is moving towards distributed and interconnected multi-data center configurations, requiring faster network connections and improving overall service quality for users.

For detailed information on EVPN-VXLAN and RoCE, you can read: Optimizing Data Center Networks: Harnessing the Power of EVPN-VXLAN, RoCE, and Advanced Routing Strategies.

HPC and Future Networks: Architectures, Technologies, and Innovations

High-Performance Computing (HPC) has become a crucial tool for solving complex problems and pushing the boundaries of scientific research and many other applications. However, efficient operation of HPC systems requires specialized infrastructure and support. HPC has emerged as an indispensable tool across domains, capable of addressing complex challenges and driving innovation in fields such as science, meteorology, finance, and healthcare.

Understanding the importance of data centers in supporting HPC is essential, as is knowing the three fundamental components that constitute high-performance computing systems: compute, storage, and networking.

Facilities in High-Performance Computing

Intensive computations in HPC environments generate substantial heat, necessitating advanced cooling solutions. Efficient cooling prevents overheating, ensuring system stability and prolonging hardware lifespan. Supporting HPC, data centers employ cutting-edge cooling facilities, including liquid cooling systems and precision air conditioning. Moreover, data center architects explore innovative cooling technologies like immersion cooling, submerging servers in special liquids for effective heat dissipation.

Success in HPC data centers relies on a range of specialized equipment tailored to meet the unique demands of high-performance computing. Key components include data center switches, server network cards, high-speed optical modules, DAC and AOC cables, and power supplies.

The Growing Demand for Network Infrastructure in High-Performance Computing

With revolutionary technologies like 5G, big data, and the Internet of Things (IoT) permeating various aspects of society, the trajectory towards an intelligent, digitized society over the next two to three decades is inevitable. Data center computing power has become a powerful driving force, shifting focus from resource scale to computational scale.

To meet the ever-growing demand for computing power, high-performance computing (HPC) has become a top priority, especially as computational cluster scales expand from the petascale to the exascale. This shift imposes increasingly higher demands on interconnect network performance, marking a clear trend of deep integration between computation and networking. HPC introduces different network performance requirements in three typical scenarios: loosely coupled computing scenarios, tightly coupled scenarios, and data-intensive computing scenarios.

In summary, high-performance computing (HPC) imposes stringent requirements on network throughput and latency. To meet these demands, the industry widely adopts Remote Direct Memory Access (RDMA) as an alternative to the TCP protocol to reduce latency and maximize CPU utilization on servers. Despite its advantages, the sensitivity of RDMA to network packet loss highlights the importance of lossless networks.

The Evolution of High-Performance Computing Networks

Traditional data center networks have historically adopted a multi-hop symmetric architecture based on Ethernet technology, relying on the TCP/IP protocol stack for transmission. However, despite over 30 years of development, Remote Direct Memory Access (RDMA) technology has gradually replaced TCP/IP, becoming the preferred protocol for HPC networks. Additionally, the choice of RDMA network layer protocols has evolved from expensive lossless networks based on the InfiniBand (IB) protocol to intelligent lossless networks based on Ethernet.

From TCP to RDMA

In traditional data centers, Ethernet technology and the TCP/IP protocol stack have been the norm for building multi-hop symmetric network architectures. However, due to two main limitations—latency issues and CPU utilization—the TCP/IP network is no longer sufficient to meet the demands of high-performance computing. To address these challenges, RDMA functionality has been introduced at the server side. RDMA is a direct memory access technology that enables data transfer directly between computer memories without involving the operating system, thus bypassing time-consuming processor operations. This approach achieves high bandwidth, low latency, and low resource utilization.

From IB to RoCE

RDMA enables direct data read and write between applications and network cards. RDMA’s zero-copy mechanism allows the receiving end to read data directly from the sending end’s memory, significantly reducing CPU burden and improving CPU efficiency. Currently, there are three choices for RDMA network layer protocols: InfiniBand, iWARP (Internet Wide Area RDMA Protocol), and RoCE (RDMA over Converged Ethernet). Although RoCE offers many advantages, its sensitivity to packet loss requires support from lossless Ethernet. This evolution of HPC networks reflects a continuous pursuit of enhanced performance, efficiency, and interoperability.

Enterprise Innovative Solution: Designing High-Performance Data Center Networks

The architecture of data center networks has evolved from the traditional core-aggregation-access model to the modern Spine-Leaf design. This approach fully utilizes network interconnection bandwidth, reduces multi-layer convergence rates, and is easy to scale. When traffic bottlenecks occur, horizontal expansion can be achieved by increasing uplink links and reducing convergence ratios, minimizing the impact on bandwidth expansion. Overlay networks utilize EVPN-VXLAN technology to achieve flexible network deployment and resource allocation.
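
The convergence (oversubscription) ratio mentioned above is simply total downlink bandwidth divided by total uplink bandwidth per leaf. The sketch below uses hypothetical port counts and speeds to show how adding uplinks lowers the ratio.

```python
# Convergence (oversubscription) ratio for a hypothetical leaf switch:
# total downlink bandwidth to servers divided by total uplink bandwidth
# to the spines. Port counts and speeds are illustrative only.
def convergence_ratio(downlinks: int, downlink_gbps: int,
                      uplinks: int, uplink_gbps: int) -> float:
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# 48 x 25G server ports with 6 x 100G uplinks -> 2:1 oversubscription.
print(convergence_ratio(48, 25, 6, 100))    # 2.0

# Adding two more 100G uplinks lowers the ratio (horizontal expansion).
print(convergence_ratio(48, 25, 8, 100))    # 1.5
```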

This solution draws on the design experience of internet data center networks, adopting the Spine-Leaf architecture and EVPN-VXLAN technology to provide a versatile and scalable network infrastructure for upper-layer services. Production and office networks are isolated by domain firewalls and connected to office buildings, labs, and regional center exits. The core switches of the production network provide up to 1.6Tb/s of inter-POD communication bandwidth and 160G of high-speed network egress capacity, with each POD’s internal horizontal network capacity reaching 24Tb, ensuring minimal packet loss. The building wiring is planned based on the Spine-Leaf architecture, with each POD’s switches interconnected using 100G links and deployed in TOR mode. The overall network structure is more streamlined, improving cable deployment and management efficiency.

Future-Oriented Equipment Selection

When envisioning and building data center networks, careful consideration of technological advancements, industry trends, and operational costs over the next five years is crucial. The choice of network switches plays a vital role in the overall design of data center networks. Traditional large-scale network designs often opt for chassis-based equipment to enhance the overall capacity of the network system, but scalability is limited.

Therefore, for the network equipment selection of this project, NVIDIA strongly advocates for adopting a modular switch network architecture. This strategic approach facilitates rapid familiarization by maintenance teams. Additionally, it provides operational flexibility for future network architecture adjustments, equipment reuse, and maintenance replacements.

In response to the ongoing trend of business transformation and the surge in demand for big data, most data center network designs adopt the mature Spine-Leaf architecture, coupled with EVPN-VXLAN technology to achieve efficient network virtualization. This architectural approach ensures convenient high-bandwidth, low-latency network traffic, laying the foundation for scalability and flexibility.

How FS Can Help

FS is a professional provider of communication and high-speed network system solutions for network, data center, and telecommunications customers. Leveraging NVIDIA® InfiniBand switches, 100G/200G/400G/800G InfiniBand transceivers, and NVIDIA® InfiniBand adapters, FS offers customers a comprehensive set of solutions based on InfiniBand and lossless Ethernet (RoCE). These solutions meet diverse application requirements, enabling users to accelerate their businesses and enhance performance. For more information, please visit FS.COM.

Empowering Your 800G Networks with MTP/MPO Fiber Cables

In the era of ultra-high-speed data transmission, MTP/MPO cables have become a key player, especially in the context of 800G networks. In essence, MTP/MPO cables emerge as catalysts for the evolution toward 800G networks, offering a harmonious blend of high-density connectivity, reliability, and scalability. This article will delve into the advantages of MTP/MPO cables in 800G networks and provide specific solutions for constructing an 800G network, offering valuable insights for upgrading your existing data center.

Challenges Faced in 800G Data Transmission

As a critical hub for storing and processing vast amounts of data, data centers require high-speed and stable networks to support data transmission and processing. The 800G network achieves a data transfer rate of 800 Gigabits per second (Gbps) and can meet the demands of large-scale data transmission and processing in data centers, enhancing overall efficiency.

Therefore, many major internet companies are either constructing new 800G data centers or upgrading existing data centers from 100G, 400G to 800G speeds. However, the pursuit of 800G data transmission faces numerous complex challenges that necessitate innovative solutions. Here, we analyze the intricate obstacles associated with achieving ultra-fast data transmission.

Insufficient Bandwidth & High Latency

The 800G network demands extensive data transmission, placing higher requirements on bandwidth. It necessitates network equipment capable of supporting greater data throughput, particularly in terms of connection cables. An ordinary duplex patch cable provides only a single transmit/receive fiber pair, which cannot supply the fiber count that parallel-optics 800G interfaces require, and therefore cannot meet the high-bandwidth requirements of 800G.

While emphasizing high bandwidth, data center networks also require low latency to meet end-user experience standards. In high-speed networks, ordinary cabling suffers greater insertion loss and signal degradation at connection points, which compromises signal quality and can add delay to transmission.
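
To see why a single duplex fiber pair falls short, consider the lane arithmetic below: an 800G port is built from eight parallel 100 Gb/s (PAM4) lanes, so a parallel-fiber interface such as DR8 needs 16 fibers (8 transmit plus 8 receive), whereas an ordinary duplex patch cable provides only one lane's worth of fiber. This is a simple worked calculation, not vendor-specific guidance.

```python
# Lane/fiber arithmetic for an 800G parallel-optics port (e.g. DR8/SR8).

PORT_GBPS = 800
LANE_GBPS = 100                           # 100G PAM4 per lane
FIBERS_PER_LANE = 2                       # one transmit + one receive fiber

lanes = PORT_GBPS // LANE_GBPS            # -> 8 parallel lanes
fibers_needed = lanes * FIBERS_PER_LANE   # -> 16 fibers per 800G port

duplex_pair_capacity = LANE_GBPS          # a duplex LC patch cable carries one lane

print(f"800G port: {lanes} lanes, {fibers_needed} fibers")
print(f"A single duplex fiber pair carries only {duplex_pair_capacity} Gb/s "
      f"at this lane rate")
```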

Limited Spatial Layout

The high bandwidth requirements of 800G networks typically come with more connection ports and optical fibers. However, the limited space in data centers or server rooms poses a challenge. Achieving high-density connections requires accommodating more connection devices in the constrained space, leading to crowded layouts and increased challenges in space management and design.

Complex Network Architecture

The transition to an 800G network necessitates a reassessment of network architecture. Upgrading to higher data rates requires consideration of network design, scalability, and compatibility with existing infrastructure. Therefore, the cabling system must meet both current usage requirements and align with future development trends. Given the long usage lifecycle of cabling systems, addressing how to match the cabling installation with multiple IT equipment update cycles becomes a challenging problem.

High Construction Cost

Implementing 800G data transmission involves investments in infrastructure and equipment. Achieving higher data rates requires upgrading and replacing existing network equipment and cabling management patterns, incurring significant costs. Cabling, in particular, underpins a wide range of network devices, and its required lifecycle is longer than that of the network equipment itself, so frequent replacements result in wasted resources.

Effectively addressing these challenges is crucial to unlocking the full potential of a super-fast, efficient data network.

Unlocking 800G Power: MTP/MPO Cables’ Key Advantages

The significance of MTP/MPO cables in high-speed networks, especially in 800G networks, lies in their ability to manage the escalating data traffic efficiently. The following are key advantages of MTP/MPO cables:

High Density, High Bandwidth

MTP/MPO cables adopt a high-density multi-fiber design, enabling the transmission of multiple fibers within a relatively small connector. This design not only provides ample bandwidth support for data centers, meeting the high bandwidth requirements of an 800G network, but also helps save space and supports the high-density connection needs for large-scale data transfers.

Additionally, MTP/MPO cables exhibit excellent optical and mechanical performance, resulting in low insertion loss in high-speed network environments. By utilizing a low-loss cabling solution, they effectively contribute to reducing latency in the network.

Flexibility and Scalability

MTP/MPO connectors come in various configurations, accommodating different fiber counts (8-core, 12-core, 16-core, 24-core, etc.), supporting both multimode and single-mode fibers. With trunk and breakout designs, support for different polarities, and male/female connector options, these features allow seamless integration into various network architectures. The flexibility and scalability of MTP/MPO connectors enable them to adapt to evolving network requirements and facilitate future expansions, particularly in the context of 800G networks.
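
As a quick reference, the sketch below summarizes the MTP/MPO cabling options for the 800G and 400G optics discussed in this article; the pairings follow the product descriptions quoted later in the text, and other optics or vendors may use different connector layouts.

```python
# MTP/MPO cabling options for the 800G/400G optics referenced in this article.
# The pairings are taken from the product list later in the text; other optics
# may differ.

CABLING = {
    "800G OSFP 2xFR4/2xLR4": "dual LC duplex (SMF)",
    "800G OSFP DR8":         "dual MPO-12/APC (SMF), e.g. 2 x 12-fiber MTP trunks",
    "800G OSFP XDR8":        "MTP/MPO-16 (SMF) trunk, or MTP-16 to 8 x LC breakout",
    "800G OSFP SR8":         "dual MPO-12/APC (MMF, e.g. OM4)",
    "400G OSFP DR4":         "MPO-12 (SMF)",
    "400G OSFP SR4":         "MPO-12/APC (MMF, e.g. OM4)",
}

def cabling_for(module: str) -> str:
    """Return the MTP/MPO cabling option listed for a given module."""
    return CABLING.get(module, "not covered in this article")

if __name__ == "__main__":
    for module, cable in CABLING.items():
        print(f"{module:24s} -> {cable}")
```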

Efficient Maintenance

The high-density and compact design of MTP/MPO cables saves rack and data room space, enabling data centers to use limited space resources more efficiently. This, in turn, facilitates the straightforward deployment and reliable operation of 800G networks, reducing the cost and performance risks associated with infrastructure changes or additions. Additionally, MTP/MPO cables featuring a plenum (OFNP) outer sheath offer fire resistance and low-smoke characteristics, minimizing potential damage and saving on cabling costs.

Scaling 800G Networks With MTP/MPO Cables

In the implementation of 800G data transmission, the wiring solution is crucial. MTP/MPO cables, as a key component, provide reliable support for high-speed data transmission. FS provides professional solutions for large-scale data center users who require a comprehensive upgrade to 800G speeds, aiming to rapidly increase data center network bandwidth to meet growing business demands.

Newly Built 800G Data Center

Given the rapid expansion of business, many large-scale internet companies choose to build new 800G data centers to increase their network bandwidth. In these data centers, every switching tier uses 800G switches, combined with MTP/MPO cables to achieve a directly connected 800G network. To ensure high-speed data transmission, 800G 2xFR4/2xLR4 modules are employed between the core switches and backbone switches, while 800G DR8 modules interconnect the leaf switches with the TOR switches.

To simplify connections, 16-core MTP/MPO OS2 trunk cables are deployed to connect directly to the 800G optical modules. This approach conserves fiber resources, optimizes wiring space, and facilitates cable management, providing a more efficient and cost-effective cabling solution for the infrastructure of 800G networks.
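
As a rough planning aid, the sketch below counts the optics and trunk cables needed for the leaf-to-TOR layer of such a build, assuming one direct 800G link per leaf/TOR pair cabled with a 16-fiber MTP/MPO OS2 trunk. The link count is a placeholder; substitute your own topology.

```python
# Rough bill-of-materials count for the leaf-to-TOR layer of a new 800G build,
# assuming one direct 800G link per leaf/TOR pair, each cabled with a
# 16-fiber MTP/MPO OS2 trunk. The link count below is a placeholder.

def leaf_tor_bom(num_links: int) -> dict:
    return {
        "800G DR8 modules": num_links * 2,          # one module at each end
        "16-fiber MTP/MPO OS2 trunks": num_links,   # one trunk per link
        "fibers in use": num_links * 16,            # 8 Tx + 8 Rx per 800G link
    }

if __name__ == "__main__":
    for item, qty in leaf_tor_bom(num_links=64).items():
        print(f"{item}: {qty}")
```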

Upgrade from 100G to 800G

Of course, many businesses choose instead to renovate and upgrade their existing data center networks. In the scenario below, engineers replaced the original 8-core MTP/MPO-LC breakout cable with the 16-core version, connecting it to the existing MTP cassettes. The modules on both ends, previously 100G QSFP28 FR, were upgraded to 800G OSFP XDR8. This straightforward deployment migrated the existing structured cabling to an 800G rate, largely because the 16-core MTP/MPO-LC breakout cable is the optimal choice for direct connections from 800G OSFP XDR8 to 100G QSFP28 FR, or from 800G QSFP-DD/OSFP DR8 to 100G QSFP28 DR.

In short, this solution aims to increase the density of fiber optic connections in the data center and optimize cabling space. It not only improves current network performance but also allows for future network expansion.
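
The logical mapping behind that migration can be sketched as follows: each of the eight 100G lanes of the 800G OSFP XDR8 port travels over one duplex LC leg of the 16-fiber MTP/MPO-to-LC breakout cable and terminates on one 100G QSFP28 FR port. The switch and port names below are purely illustrative placeholders.

```python
# Illustrative lane mapping for an 800G OSFP XDR8 port broken out to
# 8 x 100G QSFP28 FR ports via a 16-fiber MTP/MPO-to-LC breakout cable.
# Port names are hypothetical placeholders.

def xdr8_breakout_map(osfp_port: str, qsfp28_ports: list[str]) -> dict:
    """Map each 100G lane of the 800G port to a breakout leg and far-end port."""
    if len(qsfp28_ports) != 8:
        raise ValueError("an 800G XDR8 port breaks out into exactly 8 x 100G")
    return {
        f"{osfp_port} lane {lane}": (f"LC leg {lane}", far_end)
        for lane, far_end in enumerate(qsfp28_ports, start=1)
    }

if __name__ == "__main__":
    mapping = xdr8_breakout_map(
        "spine1 / OSFP port 1",
        [f"tor{r} / QSFP28 port 1" for r in range(1, 9)],
    )
    for lane, (leg, far_end) in mapping.items():
        print(f"{lane} -> {leg} -> {far_end}")
```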

Elevating from 400G to the 800G Network

How can an existing 400G data center network be upgraded to 800G? Let's explore the best practices for achieving this goal with MTP/MPO cables.

Based on the original 400G network, the core, backbone, and leaf switches have all been upgraded to an 800G rate, while the TOR (Top of Rack) switches remain at a 400G rate. The core and backbone switches use 800G 2xFR4/2xLR4 modules, the leaf switches use 800G DR8 modules, and the TOR switches adopt 400G DR4 modules. Deploying two 12-core MTP/MPO OS2 trunk cables in a breakout configuration between the 400G and 800G optical modules enables the interconnection.

Furthermore, there is a second connectivity option in which the 800G port uses an OSFP SR8 optical module, the 400G port uses an OSFP SR4 optical module, and the two are connected with 12-core MTP® OM4 trunk cables.

These two cabling solutions enhance scalability, prevent network bottlenecks, reduce latency, and are conducive to expanding bandwidth when transitioning from lower-speed to higher-speed networks in the future. Additionally, this deployment retains the existing network equipment, significantly lowering cost expenditures.
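
A minimal sketch of the first option's breakout logic: the eight lanes of the 800G DR8 leaf port are split into two groups of four, each group riding one 12-fiber MTP/MPO OS2 trunk (with 8 of the 12 fibers carrying traffic) down to a 400G DR4 TOR port. Device and port names are assumptions for illustration only.

```python
# Illustrative 800G DR8 -> 2 x 400G DR4 breakout over two 12-fiber MTP trunks.
# Each trunk carries 4 lanes (4 Tx + 4 Rx = 8 active fibers of the 12).
# Device/port names are hypothetical.

LANES_PER_800G = 8
LANES_PER_400G = 4

def dr8_to_dr4_breakout(leaf_port: str, tor_ports: list[str]) -> list[dict]:
    if len(tor_ports) != LANES_PER_800G // LANES_PER_400G:
        raise ValueError("an 800G DR8 port splits into exactly 2 x 400G DR4")
    plan = []
    for trunk, tor_port in enumerate(tor_ports, start=1):
        first_lane = (trunk - 1) * LANES_PER_400G + 1
        plan.append({
            "trunk": f"12-fiber MTP/MPO OS2 trunk #{trunk}",
            "leaf side": f"{leaf_port}, lanes {first_lane}-{first_lane + 3}",
            "tor side": tor_port,
            "active fibers": LANES_PER_400G * 2,
        })
    return plan

if __name__ == "__main__":
    for leg in dr8_to_dr4_breakout("leaf1 / OSFP 1", ["tor1 / port 1", "tor2 / port 1"]):
        print(leg)
```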

Item | Product | Description
1 | OSFP-DR8-800G | NVIDIA InfiniBand MMS4X00-NM compatible OSFP 800G DR8 PAM4 2x DR4 1310nm 500m DOM dual MPO-12/APC NDR SMF optical transceiver, finned top.
2 | OSFP800-XDR8-B1 | Generic compatible 800GBASE-XDR8 OSFP PAM4 1310nm 2km DOM MTP/MPO-16 SMF optical transceiver module.
3 | OSFP-2FR4-800G | NVIDIA InfiniBand MMS4X50-NM compatible OSFP 800G 2FR4 PAM4 1310nm 2km DOM dual LC duplex/UPC NDR SMF optical transceiver, finned top.
4 | OSFP-SR8-800G | NVIDIA InfiniBand MMA4Z00-NS compatible OSFP 800G SR8 PAM4 2x SR4 850nm 50m DOM dual MPO-12/APC MMF NDR finned top optical transceiver module for QM9790/9700 switches.
5 | OSFP-SR4-400G-FL | NVIDIA InfiniBand MMA4Z00-NS400 compatible OSFP 400G SR4 PAM4 850nm 50m DOM MPO-12/APC MMF NDR flat top optical transceiver module for ConnectX-7 HCA.
6 | 16FMTPSMF | MTP®-16 APC (Female) to MTP®-16 APC (Female) OS2 single mode standard IL trunk cable, 16 fibers, plenum (OFNP), yellow, for 800G network connection.
7 | 16FMTPLCSMF | MTP®-16 APC (Female) to 8 LC UPC duplex OS2 single mode standard IL breakout cable, 16 fibers, plenum (OFNP), yellow, for 800G network connection.
8 | 12FMTPSMF | MTP®-12 (Female) to MTP®-12 (Female) OS2 single mode elite trunk cable, 12 fibers, type B, plenum (OFNP), yellow.
9 | 12FMTPOM4 | MTP®-12 APC (Female) to MTP®-12 APC (Female) OM4 multimode elite trunk cable, 12 fibers, type B, plenum (OFNP), magenta.

For more specific 800G connectivity solutions, please refer to the 800G MTP/MPO Cabling Guide.

Conclusion

Ultimately, the diverse range of MTP/MPO cable types provides tailored solutions for different connectivity scenarios in 800G networks. As organizations navigate the complexities of high-speed data transmission, MTP/MPO cables stand as indispensable enablers, paving the way for a new era of efficient and robust network infrastructures.

How FS Can Help

FS's comprehensive networking solutions and product offerings not only save costs but also reduce power consumption, delivering higher value. Considering an upgrade to 800G for your data center network? FS tailors customized solutions for you. Don't wait any longer: register as an FS website member now and enjoy free technical support.