Network Virtualisation: NVGRE vs. VXLAN Explained

The rise of virtualisation technology has revolutionised data centres, enabling the operation of multiple virtual machines on the same physical infrastructure. However, traditional data centre network designs are not well-suited to these new applications, necessitating a new approach to address these challenges. NVGRE and VXLAN were created to meet this need. This article delves into NVGRE and VXLAN, exploring their differences, similarities, and advantages in various scenarios.

Unleashing the Power of NVGRE Technology

NVGRE (Network Virtualization using Generic Routing Encapsulation) is a network virtualisation method designed to overcome the limitations of traditional VLANs in complex virtual environments.

How It Works

NVGRE encapsulates data packets by adding a Tenant Network Identifier (TNI) to the packet, transmitting it over existing IP networks, and then decapsulating and delivering it on the target host. This enables large-scale virtual networks to be more flexible and scalable on physical infrastructure.

1. Tenant Network Identifier (TNI)

NVGRE introduces a 24-bit TNI to identify different virtual networks or tenants. Each TNI corresponds to a separate virtual network, allowing multiple virtual networks to operate on the same physical infrastructure without interference.

2. Packet Encapsulation

An NVGRE packet wraps the original frame in outer headers:

Outer Ethernet and IP Headers: Addressed between the source and destination hosts (the NVGRE endpoints), allowing the packet to be routed across the physical network.

GRE Header with TNI: Carries the 24-bit virtual network identifier (plus an 8-bit flow ID for entropy).

Original Ethernet Frame: Includes the sending VM's source MAC address, the receiving VM's destination MAC address, the Ethernet protocol type (usually IPv4 or IPv6), etc.

Data packets are encapsulated into NVGRE packets in this form for communication between VMs.

3. Transport Network

NVGRE packets are transmitted over existing IP networks, including physical or virtual networks. The IP header information is used for routing, while the TNI identifies the target virtual network.

4. Decapsulation

When NVGRE packets reach the host of the target VM, the host decapsulates them, extracting the original Ethernet frame and delivering it to the target VM.

5. MAC Address Table Maintenance

NVGRE hosts maintain a MAC address table to map VM MAC addresses to TNIs. When a host receives an NVGRE packet, it looks up the MAC address table to determine which VM to deliver the packet to.

6. Broadcast and Multicast Support

NVGRE uses broadcast and multicast to support communication within virtual networks, allowing VMs to perform broadcast and multicast operations for protocols like ARP and Neighbor Discovery.
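
For a concrete picture of the encapsulation walkthrough above, here is a minimal sketch using the Scapy library. The host addresses, VM MAC addresses, TNI, and flow ID are hypothetical; NVGRE carries the TNI in the upper 24 bits of the GRE key field and marks the inner frame with protocol type 0x6558 (Transparent Ethernet Bridging).

```python
# Minimal sketch of NVGRE-style encapsulation with Scapy (illustrative values only).
from scapy.all import Ether, IP, GRE, Raw

TNI = 5001      # hypothetical 24-bit tenant network identifier
FLOW_ID = 0     # per-flow entropy byte defined by NVGRE

# Original frame exchanged between two VMs on the same virtual network.
inner = Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") / Raw(b"app data")

# Outer headers are addressed between the hosts (NVGRE endpoints), not the VMs.
# Scapy sets IP protocol 47 (GRE) automatically when GRE follows IP.
outer = (
    Ether()
    / IP(src="10.0.0.1", dst="10.0.0.2")
    / GRE(proto=0x6558, key_present=1, key=(TNI << 8) | FLOW_ID)
    / inner
)

outer.show()  # inspect the outer headers, TNI, and encapsulated frame
```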

Features

  • Network Virtualisation Goals: NVGRE aims to provide far more isolated virtual networks than traditional VLANs allow, supporting multi-tenancy and load balancing.
  • Encapsulation and Tunneling: Uses encapsulation and tunneling to isolate virtual networks, making VM communication appear direct without considering the underlying physical network.
  • Cross-Data Centre Scalability: Designed to support cross-location virtual networks, ideal for distributed data centre architectures.

A Comprehensive Look at VXLAN Technology

VXLAN (Virtual Extensible LAN) is a network virtualisation technology designed to address the shortage of virtual networks in large cloud data centres.

How It Works

VXLAN encapsulates data packets by adding a Virtual Network Identifier (VNI), transmitting them over existing IP networks, and then decapsulating and delivering them on the target host.

1. Virtual Network Identifier (VNI)

VXLAN introduces a 24-bit VNI to distinguish different virtual networks. Each VNI represents a separate virtual network, allowing multiple virtual networks to operate on the same physical infrastructure without interference.

2. Packet Encapsulation

Outer Source IP Address: The IP address of the host (VTEP) where the sending VM resides.

Outer Destination IP Address: The IP address of the host (VTEP) serving the receiving VM.

UDP Header: Contains source and destination port information (destination port 4789) that identifies the packet as VXLAN.

VNI: The 24-bit virtual network identifier carried in the VXLAN header.

Original Ethernet Frame: Includes the VMs' source MAC address, destination MAC address, Ethernet protocol type, etc.

Data packets are encapsulated into VXLAN packets in this form for communication between VMs.

3. Transport Network

VXLAN packets are transmitted over existing IP networks. The IP header information is used for routing, while the VNI identifies the target virtual network.

4. Decapsulation

When VXLAN packets reach the host of the target VM, the host decapsulates them, extracting the original Ethernet frame and delivering it to the target VM.

5. MAC Address Table Maintenance

VXLAN hosts maintain a MAC address table to map VM MAC addresses to VNIs. When a host receives a VXLAN packet, it looks up the MAC address table to determine which VM to deliver the packet to.

6. Broadcast and Multicast Support

VXLAN uses multicast to simulate broadcast and multicast behaviour within virtual networks, supporting protocols like ARP and Neighbor Discovery.
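
The same walkthrough can be sketched for VXLAN with Scapy. The VTEP addresses, MAC addresses, and VNI are hypothetical; UDP port 4789 is the IANA-assigned VXLAN port.

```python
# Minimal sketch of VXLAN encapsulation with Scapy (illustrative values only).
from scapy.all import Ether, IP, UDP, Raw
from scapy.layers.vxlan import VXLAN

VNI = 7001  # hypothetical 24-bit virtual network identifier

# Original frame exchanged between two VMs.
inner = Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") / Raw(b"app data")

# Outer headers are addressed between the VTEPs (hosts), not the VMs.
vxlan_packet = (
    Ether()
    / IP(src="192.0.2.10", dst="192.0.2.20")   # VTEP addresses
    / UDP(sport=49152, dport=4789)             # 4789 is the IANA VXLAN port
    / VXLAN(vni=VNI)
    / inner
)

vxlan_packet.show()  # inspect the outer headers, VNI, and encapsulated frame
```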

Features

  • Expanded VLAN Address Space: Extends VLAN identifier capacity from 4096 to 16 million with a 24-bit segment ID.
  • Virtual Network Isolation: Allows multiple virtual networks to coexist on the same infrastructure, each with a unique segment ID.
  • Multi-Tenancy Support: Ideal for environments where different tenants need isolated virtual networks.
  • Layer 2 and 3 Extension: Supports complex network topologies and routing configurations.
  • Industry Support: Widely supported by companies like Cisco, VMware, and Arista Networks.

NVGRE vs VXLAN: Uncovering the Best Virtualization Tech

NVGRE and VXLAN are both technologies for virtualising data centre networks, aimed at addressing issues in traditional network architectures such as isolation, scalability, and performance. While their goals are similar, they differ in implementation and several key aspects.

Supporters and Transport Protocols

NVGRE is backed mainly by Microsoft and uses GRE as its transport encapsulation, while VXLAN is driven chiefly by Cisco and VMware and encapsulates traffic in UDP.

Packet Format

VXLAN packets have a 24-bit VNI for 16 million virtual networks. NVGRE uses the GRE header’s lower 24 bits as the TNI, also supporting 16 million virtual networks.
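
The 16-million figure follows directly from the identifier width, and it is what lifts both technologies past the 12-bit VLAN ID space:

$$2^{24} = 16\,777\,216 \text{ virtual networks} \qquad \text{vs.} \qquad 2^{12} = 4096 \text{ VLAN IDs (4094 usable)}$$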

Transmission Method

VXLAN uses multicast to simulate broadcast and multicast for MAC address learning and discovery. NVGRE uses multiple IP addresses for enhanced load balancing without relying on flooding and IP multicast.

Fragmentation

NVGRE supports fragmentation to manage MTU sizes, while VXLAN typically requires the network to support jumbo frames and does not support fragmentation.

Conclusion

VXLAN and NVGRE represent significant advancements in network virtualisation, expanding virtual network capacity and enabling flexible, scalable, and high-performance cloud and data centre networks. With support from major industry players, these technologies have become essential for building agile virtualised networking environments.

How FS Can Help

FS offers a wide range of data centre switches, from 1G to 800G, to meet various network requirements and applications. FS switches support VXLAN EVPN architectures and MPLS forwarding, with comprehensive protocol support for L3 unicast and multicast routing, including BGP, OSPF, EIGRP, RIPv2, PIM-SM, SSM, and MSDP. Explore FS high-quality switches and expert solutions tailored to enhance your network at the FS website.

Stacking Technology vs MLAG Technology: What Sets Them Apart?

Businesses are growing and networks are becoming more complex, and single-device solutions struggle to meet the high availability and performance requirements of modern data centres. To address this, two horizontal virtualisation technologies have emerged: Stacking and Multichassis Link Aggregation Group (MLAG). This article compares Stacking and MLAG, covering their principles, features, advantages, and disadvantages, to help you choose the best option for your network environment.

Understanding Stacking Technology

Stacking technology involves combining multiple stackable devices into a single logical unit. Users can control and use multiple devices as one, increasing port count and switching capacity while improving reliability through mutual backup between devices.

Advantages of Stacking:

  • Simplified Management: Managed via a single IP address, reducing management complexity. Administrators can configure and monitor the entire stack from one interface.
  • Increased Port Density: Combining multiple switches offers more ports, meeting the demands of large-scale networks.
  • Seamless Redundancy: If one stack member fails, others seamlessly take over, ensuring high network availability.
  • Enhanced Performance: Increased interconnect bandwidth among switches improves data exchange efficiency and performance.

Unlocking the Power of MLAG Technology

Multichassis Link Aggregation Group (MLAG) is a newer cross-device link aggregation technology. It allows two access switches to negotiate link aggregation as if they were one device. This cross-device link aggregation enhances reliability from the single-board level to the device level, making MLAG suitable for modern network topologies requiring redundancy and high availability.

Advantages of MLAG:

  • High Availability: Increases network availability by allowing smooth traffic transition between switches in case of failure. There are no single points of failure at the switch level.
  • Improved Bandwidth: Aggregating links across multiple switches significantly increases accessible bandwidth, beneficial for high-demand environments.
  • Load Balancing: Evenly distributes traffic across member links, preventing overloads and maximising network utilisation (see the hash-based sketch after this list).
  • Compatibility and Scalability: Better compatibility and scalability, able to negotiate link aggregation with devices from different vendors.
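
The load-balancing behaviour can be illustrated with a minimal sketch of hash-based member-link selection. The flows, field choices, and two-link setup are hypothetical; real switches compute a comparable hash in hardware over configurable header fields.

```python
# Minimal sketch of hash-based load balancing across LAG/MLAG member links.
import hashlib

MEMBER_LINKS = ["link-to-switch-A", "link-to-switch-B"]  # e.g. one uplink per MLAG peer

def select_member_link(src_ip, dst_ip, src_port, dst_port, proto):
    """Pick a member link from the flow's 5-tuple, keeping each flow on one link."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(MEMBER_LINKS)
    return MEMBER_LINKS[index]

# Packets of the same flow always hash to the same link; different flows spread out.
print(select_member_link("10.1.1.10", "10.2.2.20", 51515, 443, "tcp"))
print(select_member_link("10.1.1.11", "10.2.2.20", 51516, 443, "tcp"))
```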

Stacking vs. MLAG: Which Network Virtualisation Tech Reigns Supreme?

Both Stacking and MLAG are crucial for achieving redundant access and link redundancy, significantly enhancing the reliability and scalability of data centre networks. Despite their similarities, each has distinct advantages, disadvantages, and suitable application scenarios. Understanding the concepts and advantages of Stacking and MLAG is crucial. Here’s a detailed comparison to help you distinguish between the two:

Reliability

Stacking: Centralised control plane shared by all switches, with the master switch managing the stack. Failure of the master switch can affect the entire system despite backup switches.

MLAG: Each switch operates with an independent control plane. Consequently, the failure of one switch does not impact the functionality of the other, effectively isolating fault domains and enhancing overall network reliability.

Configuration Complexity

Stacking: Appears as a single device logically, simplifying configuration and management.

MLAG: Requires individual configuration of each switch but can be simplified with modern management tools and automation scripts.

Cost

Stacking: Requires specialised stacking cables, adding hardware costs.

MLAG: Requires peer-link cables, which incur costs comparable to stacking cables.

Performance

Stacking: Performance may be limited by the master switch’s CPU load, affecting overall system performance.

MLAG: Each switch independently handles data forwarding, distributing CPU load and enhancing performance.

Upgrade Complexity

Stacking: Higher upgrade complexity, needing synchronised upgrades of all member devices, with longer operation times and higher risks.

MLAG: Lower upgrade complexity, allowing independent upgrades of each device, reducing complexity and risk.

Upgrade Downtime

Stacking: The duration of downtime varies between 20 seconds and 1 minute, contingent upon the traffic load.

MLAG: Minimal downtime, usually within seconds, with negligible impact.

Network Design

Stacking: Simpler design, appearing as a single device, easier to manage and design.

MLAG: More complex design, logically still two separate devices, requiring more planning and management.

Enhancing Enterprise Networks: Stacking vs. MLAG Applications

Having covered the differences between Stacking and MLAG, this section explains how the two technologies are used in real-world situations, helping you make informed decisions when setting up a network.

Stacking is suitable for small to medium-sized network environments that require simplified management and configuration and enhanced redundancy. It is widely used in enterprise campus networks and small to medium-sized data centres.

MLAG, on the other hand, is ideal for large data centres and high-density server access environments that require high availability and high performance. It offers redundancy and load balancing across devices. The choice between these technologies depends on the specific needs, scale, and complexity of your network.

In practical situations, Stacking and MLAG technologies can be combined to take advantage of their strengths. This creates a synergistic effect that is stronger than each technology individually. Stacking technology simplifies the network topology, increasing bandwidth and fault tolerance. MLAG technology provides redundancy and load balancing, enhancing network availability.

Therefore, consider integrating Stacking and MLAG technologies to achieve better network performance and reliability when designing and deploying enterprise networks.

Conclusion

Both Multichassis Link Aggregation (MLAG) and stackable switches offer unique advantages in modern network architectures. MLAG ensures backup and reliability with cross-switch link aggregation. Stackable switches allow for easy management and scalability by acting as one unit. Understanding the specific requirements and use cases of each technology is essential for designing resilient and efficient network infrastructures.

How FS Can Help

FS, a trusted global ICT products and solutions provider, offers a range of data centre switches to meet diverse enterprise needs. FS data centre switches support a variety of features and protocols, including stacking, MLAG, and VXLAN, making them suitable for diverse network construction. Customised solutions tailored to your requirements can assist with network upgrades. Visit the FS website to explore products and solutions that can help you build a high-performance network today.

VXLAN vs. MPLS: From Data Centre to Metropolitan Area Network

In recent years, the advancement of cloud computing, virtualisation, and containerisation technologies has driven the adoption of network virtualisation. Both MPLS and VXLAN leverage virtualisation concepts to create logical network architectures, enabling more complex and flexible domain management. However, they serve different purposes. This article will compare VXLAN and MPLS, explaining why VXLAN is more popular than MPLS in metropolitan and wide area networks.

Understanding VXLAN and MPLS: Key Concepts Unveiled

VXLAN

Virtual Extensible LAN (VXLAN) encapsulates Layer 2 Ethernet frames within Layer 3 UDP packets, enabling devices and applications to communicate over a large physical network as if they were on the same Layer 2 Ethernet network. VXLAN technology uses the existing Layer 3 network as an underlay to create a virtual Layer 2 network, known as an overlay. As a network virtualisation technology, VXLAN addresses the scalability challenges associated with large-scale cloud computing setups and deployments.

MPLS

Multi-Protocol Label Switching (MPLS) is a technology that uses labels to direct data transmission quickly and efficiently across open communication networks. The term “multi-protocol” indicates that MPLS can support various network layer protocols and is compatible with multiple Layer 2 data link layer technologies. This technology simplifies data transmission between two nodes by using short path labels instead of long network addresses. MPLS allows the addition of more sites with minimal configuration, and it operates alongside IP rather than replacing it. Combining MPLS with VPN technology adds an extra layer of security, since MPLS itself lacks built-in security features.
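
As a conceptual illustration of label switching (not a router implementation; the label values and interface names are hypothetical), a label-switching node forwards on the short incoming label alone and swaps it for an outgoing label:

```python
# Minimal sketch of MPLS-style label switching (hypothetical labels and interfaces).
# The node forwards on the incoming label alone rather than doing a longest-prefix IP lookup.

# Label forwarding table: incoming label -> (outgoing label, egress interface)
LFIB = {
    100: (200, "ge-0/0/1"),
    101: (201, "ge-0/0/2"),
}

def forward(packet):
    """Swap the top label and pick the egress interface from the LFIB."""
    in_label = packet["label"]
    out_label, out_iface = LFIB[in_label]
    packet["label"] = out_label          # label swap
    return out_iface, packet

iface, pkt = forward({"label": 100, "payload": b"ip packet"})
print(iface, pkt)   # ge-0/0/1 {'label': 200, 'payload': b'ip packet'}
```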

Data Centre Network Architecture Based on MPLS

MPLS Layer 2 VPN (L2VPN) provides Layer 2 connectivity across a Layer 3 network, but it requires all routers in the network to be IP/MPLS routers. Virtual networks are isolated using MPLS pseudowire encapsulation and can stack MPLS labels, similar to VLAN tag stacking, to support a large number of virtual networks.

IP/MPLS is commonly used in telecom service provider networks, so many service providers’ L2VPN services are implemented using MPLS. These include point-to-point L2VPN and multipoint L2VPN implemented according to the Virtual Private LAN Service (VPLS) standard. These services typically conform to the MEF Carrier Ethernet service definitions of E-Line (point-to-point) and E-LAN (multipoint).

Because MPLS and its associated control plane protocols are designed for highly scalable Layer 3 service provider networks, some data centre operators have adopted MPLS L2VPN in their data centre networks to overcome the scalability and resilience limitations of Layer 2 switched networks, as shown in the diagram.

Why is VXLAN Preferred Over MPLS in Data Centre Networks?

Considering the features and applications of both technologies, the following points summarise why VXLAN is more favoured:

Cost of MPLS Routers

For a long time, some service providers have been interested in building cost-effective metropolitan networks using data centre-grade switches. Over 20 years ago, the first generation of competitive metro Ethernet service providers, like Yipes and Telseon, built their networks using the most advanced gigabit Ethernet switches available in enterprise networks at the time. However, such networks struggled to provide the scalability and resilience required by large service providers (SPs). Consequently, most large SPs shifted to MPLS (as shown in the diagram below). However, MPLS routers are more expensive than ordinary Ethernet switches, and this cost disparity has persisted over the decades. Today, data centre-grade switches combined with VXLAN overlay architecture can largely eliminate the shortcomings of pure Layer 2 networks without the high costs of MPLS routing, attracting a new wave of SPs.

Tight Coupling Between Core and Edge

MPLS-based VPN solutions require tight coupling between edge and core devices, meaning every node in the data centre network must support MPLS. In contrast, VXLAN only requires a VTEP (VXLAN Tunnel Endpoint) in edge nodes (e.g., leaf switches) and can use any IP-capable device or IP transport network to implement data centre spine and data centre interconnect (DCI).

MPLS Expertise

Outside of large service providers, MPLS technology is challenging to learn, and relatively few network engineers can easily build and operate MPLS-based networks. VXLAN, being simpler, is becoming a fundamental technology widely mastered by data centre network engineers.

Advancements in Data Centre Switching Technology

Modern data centre switching chips have integrated numerous functions that make metro networks based on VXLAN possible. Here are two key examples:

  • Hardware-based VTEP supporting line-rate VXLAN encapsulation.
  • Expanded tables providing the routing and forwarding scale required to create resilient, scalable Layer 3 underlay networks and multi-tenant overlay services.

Additionally, newer data centre-grade switches have powerful CPUs capable of supporting advanced control planes crucial for extended Ethernet services, whether it’s BGP EVPN (a protocol-based approach) or an SDN-based protocol-less control plane. Therefore, in many metro network applications, specialised (and thus high-cost) routing hardware is no longer necessary.

VXLAN Overlay Architecture for Metropolitan and Wide Area Networks

Overlay networks have been widely adopted in various applications such as data centre networks and enterprise SD-WAN. A key commonality among these overlay networks is their loose coupling with the underlay network. Essentially, as long as the network provides sufficient capacity and resilience, the underlay network can be constructed using any network technology and utilise any control plane. The overlay is only defined at the service endpoints, with no service provisioning within the underlay network nodes.

One of the primary advantages of SD-WAN is its ability to utilise various networks, including broadband or wireless internet services, which are widely available and cost-effective, providing sufficient performance for many users and applications. When VXLAN overlay is applied to metropolitan and wide area networks, similar benefits are also realised, as depicted in the diagram.

When building a metropolitan network to provide services like Ethernet Line (E-Line), Multipoint Ethernet Local Area Network (E-LAN), or Layer 3 VPN (L3VPN), it is crucial to ensure that the Underlay can meet the SLA (Service Level Agreement) requirements for such services.

VXLAN-Based Metropolitan Network Overlay Control Plane Options

So far, our focus has mainly been on the advantages of VXLAN over MPLS in terms of network architecture and capital costs, i.e., the advantages of the data plane. However, VXLAN does not specify a control plane, so let’s take a look at the Overlay control plane options.

The most prominent control plane option for creating a VXLAN Overlay and providing Overlay services is BGP EVPN, a protocol-based approach that requires service configuration in each edge node. The main drawback of BGP EVPN is its operational complexity.

Another, protocol-less approach uses SDN, with services defined in an SDN controller that programs the data plane of each edge node. This approach eliminates much of the operational complexity of protocol-based BGP EVPN. Nonetheless, the centralised SDN controller architecture, well suited to single-site data centre architectures, presents significant scalability and resilience issues when implemented in metropolitan and wide area networks. As a result, it is unclear whether it is a superior alternative to MPLS for metropolitan networks.

There’s also a third possibility—decentralised or distributed SDN, in which the SDN controller’s functionality is duplicated and spread across the network. This can also be referred to as a “controller-less” SDN because it doesn’t necessitate a separate controller server/device, thereby completely resolving the scalability and resilience problems associated with centralised SDN control while maintaining the advantages of simplified and expedited service configuration.

Deployment Options

Due to VXLAN’s ability to decouple Overlay services delivery from the Underlay network, it creates deployment options that MPLS cannot match, such as virtual service Overlays on existing IP infrastructure, as shown in the diagram. VXLAN-based switch deployments at the edge of existing networks, scalable according to business requirements, allow for the addition of new Ethernet and VPN services and thus generate new revenue without altering the existing network.

VXLAN Overlay Deployment on Existing Metropolitan Networks

The metropolitan network infrastructure shown in Figure 2 can support all services offered by an MPLS-based network, including commercial internet, Ethernet and VPN services, as well as consumer triple-play services. Moreover, it completely eliminates the costs and complexities associated with MPLS.

Converged Metropolitan Core with VXLAN Service Overlay

Conclusion

VXLAN has become the most popular overlay network virtualization protocol in data centre network architecture, surpassing many alternative solutions. When implemented with hardware-based VTEPs in switches and DPUs, and combined with BGP EVPN or SDN control planes and network automation, VXLAN-based overlay networks can provide the scalability, agility, high performance, and resilience required for distributed cloud networks in the foreseeable future.

How FS Can Help

FS is a trusted provider of ICT products and solutions to enterprise customers worldwide. Our range of data centre switches covers multiple speeds, catering to diverse business needs. We offer personalised customisation services to tailor exclusive solutions for you and assist with network upgrades.

Explore the FS website today, choose the products and solutions that best suit your requirements, and build a high-performance network.

Network Virtualisation: VXLAN Benefits & Differences

With the rapid development of cloud computing and virtualisation technologies, data centre networks are facing increasing challenges. Traditional network architectures have limitations in meeting the demands of large-scale data centres, particularly in terms of scalability, isolation, and flexibility. To overcome these limitations and provide better performance and scalability for data centre networks, VXLAN (Virtual Extensible LAN) has emerged as an innovative network virtualisation technology. This article will detail the principles and advantages of VXLAN, its applications in data centre networks, and help you understand the differences between VXLAN and VLAN.

The Power of VXLAN: Transforming Data Centre Networks

VXLAN is a network virtualisation technology designed to overcome the limitations of traditional Ethernet, offering enhanced scalability and isolation. It enables the creation of a scalable virtual network on existing infrastructure, allowing virtual machines (VMs) to move freely within a logical network, regardless of the underlying physical network topology. VXLAN achieves this by creating a virtual Layer 2 network over an existing IP network, encapsulating traditional Ethernet frames within UDP packets for transmission. This encapsulation allows VXLAN to operate on current network infrastructure without requiring extensive modifications.

VXLAN uses a 24-bit VXLAN Network Identifier (VNI) to identify virtual networks, allowing multiple independent virtual networks to coexist simultaneously. Within the encapsulated frame, the destination MAC address is that of the target virtual machine or physical host in the VXLAN network, enabling direct Layer 2 communication between virtual machines. VXLAN also supports multipath transmission through MP-BGP EVPN and provides multi-tenant isolation within the network.

How it works

  • Encapsulation: When a virtual machine (VM) sends an Ethernet frame, the VXLAN module encapsulates it in a UDP packet. The source IP address of the packet is the IP address of the host where the VM resides, and the destination IP address is that of the remote endpoint of the VXLAN tunnel. The VNI field in the VXLAN header identifies the target virtual network. The UDP packet is then transmitted through the underlying network to reach the destination host.
  • Decapsulation: Upon receiving a VXLAN packet, the VXLAN module parses the UDP packet header to extract the encapsulated Ethernet frame. By examining the VNI field, the VXLAN module identifies the target virtual network and forwards the Ethernet frame to the corresponding virtual machine or physical host.

This process of encapsulation and decapsulation allows VXLAN to transparently transport Ethernet frames over the underlying network, while simultaneously providing logically isolated virtual networks.
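
To make the decapsulation and lookup steps concrete, here is a minimal software sketch using Scapy with a hypothetical per-VNI MAC table; hardware VTEPs implement the same logic in switching ASICs.

```python
# Minimal sketch of VXLAN decapsulation and VNI-based delivery (Scapy, hypothetical tables).
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Hypothetical per-VNI MAC table: (vni, destination MAC) -> local VM port
MAC_TABLE = {
    (7001, "66:77:88:99:aa:bb"): "vm-port-3",
}

def handle_vxlan(raw_bytes):
    pkt = Ether(raw_bytes)
    if VXLAN not in pkt:
        return None                      # not a VXLAN packet
    vni = pkt[VXLAN].vni                 # identifies the target virtual network
    inner = pkt[VXLAN].payload           # the original Ethernet frame
    port = MAC_TABLE.get((vni, inner[Ether].dst))
    return vni, port, inner              # deliver the frame on the looked-up port

# Example: decapsulate a locally built sample packet.
sample = (
    Ether()
    / IP(src="192.0.2.10", dst="192.0.2.20")
    / UDP(dport=4789)
    / VXLAN(vni=7001)
    / Ether(dst="66:77:88:99:aa:bb") / b"app data"
)
print(handle_vxlan(bytes(sample)))
```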

Key Components

  • VXLAN Identifier (VNI): Used to distinguish different virtual networks, similar to a VLAN identifier.
  • VTEP (VXLAN Tunnel Endpoint): A network device responsible for encapsulating and decapsulating VXLAN packets, typically a switch or router.
  • Control Plane and Data Plane: The control plane is responsible for establishing and maintaining VXLAN tunnels, while the data plane handles the actual data transmission.

The Benefits of VXLAN: A Game Changer for Virtual Networks

VXLAN, as an emerging network virtualisation technology, offers several advantages in data centre networks:

Scalability

VXLAN uses a 24-bit VNI identifier, supporting up to 16,777,216 virtual networks, each with its own independent Layer 2 namespace. This scalability meets the demands of large-scale data centres and supports multi-tenant isolation.

Cross-Subnet Communication

Traditional Ethernet relies on Layer 3 routers for forwarding across different subnets. VXLAN, by using the underlying IP network as the transport medium, enables cross-subnet communication within virtual networks, allowing virtual machines to migrate freely without changing their IP addresses.

Flexibility

VXLAN can operate over existing network infrastructure without requiring significant modifications. It is compatible with current network devices and protocols, such as switches, routers, and BGP. This flexibility simplifies the creation and management of virtual networks.

Multipath Transmission

VXLAN leverages multipath transmission (MP-BGP EVPN) to achieve load balancing and redundancy in data centre networks. It can choose the optimal path for data transmission based on network load and path availability, providing better performance and reliability.

Security

VXLAN supports tunnel encryption, ensuring data confidentiality and integrity over the underlying IP network. Using secure protocols (like IPsec) or virtual private networks (VPNs), VXLAN can offer a higher level of data transmission security.

VXLAN vs. VLAN: Unveiling the Key Differences

VXLAN (Virtual Extensible LAN) and VLAN (Virtual Local Area Network) are two distinct network isolation technologies that differ significantly in their implementation, functionality, and application scenarios.

Implementation

VLAN: VLAN is a Layer 2 (data link layer) network isolation technology that segments a physical network into different virtual networks using VLAN identifiers (VLAN IDs) configured on switches. VLANs use VLAN tags within a single physical network to identify and isolate different virtual networks, achieving isolation between different users or devices.

VXLAN: VXLAN is a Layer 3 (network layer) network virtualisation technology that extends Layer 2 networks by creating virtual tunnels over an underlying IP network. VXLAN uses VXLAN Network Identifiers (VNIs) to identify different virtual networks and encapsulates original Ethernet frames within UDP packets to enable communication between virtual machines, overcoming physical network limitations.

Functionality

VLAN: VLANs primarily provide Layer 2 network segmentation and isolation, allowing a single physical network to be divided into multiple virtual networks. Different VLANs are isolated from each other, enhancing network security and manageability.

VXLAN: VXLAN not only provides Layer 2 network segmentation but also creates virtual networks over an underlying IP network, enabling extensive dynamic VM migration and inter-data centre communication. VXLAN offers greater network scalability and flexibility, making it suitable for large-scale cloud computing environments and virtualised data centres.

Application Scenarios

VLAN: VLANs are suitable for small to medium-sized network environments, commonly found in enterprise LANs. They are mainly used for organisational user segmentation, security isolation, and traffic management.

VXLAN: VXLAN is ideal for large data centre networks, especially in cloud computing environments and virtualised data centres. It supports large-scale dynamic VM migration, multi-tenant isolation, and network scalability, providing a more flexible and scalable network architecture.

These distinctions highlight how VXLAN and VLAN cater to different networking needs and environments, offering tailored solutions for varying levels of network complexity and scalability.

Enhancing Data Centres with VXLAN Technology

The application of VXLAN enhances the flexibility, efficiency, and security of data centre networks, forming a crucial part of modern data centre virtualisation. Here are some typical applications of VXLAN in data centres:

Virtual Machine Migration

VXLAN allows virtual machines to migrate freely between different physical hosts without changing IP addresses. This flexibility and scalability are vital for achieving load balancing, resource scheduling, and fault tolerance in data centres.

Multi-Tenant Isolation

By using different VNIs, VXLAN can divide a data centre into multiple independent virtual networks, ensuring isolation between different tenants. This isolation guarantees data security and privacy for tenants and allows each tenant to have independent network policies and quality of service guarantees.

Inter-Data Centre Connectivity

VXLAN can extend across multiple data centres, enabling the establishment of virtual network connections between them. This capability supports resource sharing, business expansion, and disaster recovery across data centres.

Cloud Service Providers

VXLAN helps cloud service providers build highly scalable virtualised network infrastructures. By using VXLAN, cloud service providers can offer flexible virtual network services and support resource isolation and security in multi-tenant environments.

Virtual Network Functions (VNF)

Combining VXLAN with Network Functions Virtualisation (NFV) enables the deployment and management of virtual network functions. VXLAN serves as the underlying network virtualisation technology, providing flexible network connectivity and isolation for VNFs, thus facilitating rapid deployment and elastic scaling of network functions.

Conclusion

In summary, VXLAN offers powerful scalability, flexibility, and isolation, providing new directions and solutions for the future development of data centre networks. By utilising VXLAN, data centres can achieve virtual machine migration, multi-tenant isolation, inter-data centre connectivity, and enhanced support for cloud service providers.

How FS Can Help

As an industry-leading provider of network solutions, FS offers a variety of high-performance data centre switches supporting multiple protocols, such as MLAG, EVPN-VXLAN, link aggregation, and LACP. FS switches come pre-installed with PicOS®, equipped with comprehensive SDN capabilities and the compatible AmpCon™ management software. This combination delivers a more resilient, programmable, and scalable network operating system (NOS) with lower TCO. The advanced PicOS® and AmpCon™ management platform enables data centre operators to efficiently configure, monitor, manage, and maintain modern data centre fabrics, achieving higher utilisation and reducing overall operational costs.

Register on the FS website now to enjoy customised solutions tailored to your needs, optimising your data centre for greater efficiency and benefits.

Data Centre Connectivity: The Surge of Coherent Optical Transceiver Technology

According to the optical transceiver report from the Yole Group, the revenue generated by optical transceivers in 2022 was approximately $11 billion. Forecasts indicate substantial growth in this field, with projections reaching $22.2 billion by 2028.

As data centres witness increased investments and rapid growth in traffic, the optical module market undergoes a transformative phase. The mainstream adoption of silicon photonics technology in optical transceivers is a key trend fueling this evolution, as data centre operators aim to maximise their infrastructure capabilities.

Click to learn more about the trends in the data centre optical module market: New Trends of Optical Transceiver Market in Data Centers | FS Community

Advancements in Coherent Optical Module Technology and Standardization Trends

Coherent technology has emerged as the leading solution for Data Center Interconnect (DCI) applications, spanning distances of 80 to 120 km in data communication. The evolution of applications has brought forth new demands for coherent optical transceiver systems. This shift has led to the development of coherent transceiver units, transitioning from initial integration with line cards and Multi-Source Agreements (MSA) transceivers to independent, standardized pluggable optical transceivers.

The latest advancements in Complementary Metal-Oxide-Semiconductor (CMOS) technology digital signal processor (DSP) chips and integrated photonics technology have paved the way for developing smaller, lower power-consuming pluggable coherent optical transceivers. The trajectory of coherent optical modules applied in metropolitan and backbone networks is characterized by high speed, miniaturization, low power consumption, and standardization of interoperability.

Presently, commercial coherent technology has progressed to support single-wavelength 800G transmission. Nonetheless, the industry lacks standardized specifications for 800G. In contrast, 400G coherent technology has reached maturity, adhering to standards such as 400ZR, OpenROADM, and OpenZR+. The Optical Internetworking Forum (OIF) is currently deliberating on the next-generation coherent technology standard, tentatively named 800ZR.

Coherent Modulation vs. PAM4 in 800G Optical Transmission

Coherent modulation used in coherent optical communication involves altering the frequency, phase, and amplitude of the optical carrier to transmit signals. Unlike intensity detection, coherent modulation requires coherent light with clear frequency and phase, primarily used for high-speed and long-distance transmission. PAM4 is suitable for high-speed, medium-short distance transmission, making it ideal for internal connections in next-generation data centres.

For example, FS OSFP 800G SR8 optical transceivers employ PAM4 modulation, suitable for use in InfiniBand NDR end-to-end systems, designed for Quantum-2 air-cooled switches. They are the ideal solution for the supercomputing and artificial intelligence industries, seamlessly integrating into compute and storage infrastructures, ensuring efficient high-performance connectivity.

In the context of long-distance Data Center Interconnect (DCI) scenarios, PAM4 faces competition from coherent modulation based on the 400ZR protocol. As data centre speeds enter the era of 800G, the differences between PAM4 and coherent technology are gradually diminishing. The competitiveness of each technology depends on factors such as cost and power consumption.

Choosing Between InP and Silicon Photonics

In the context of coherent technology, the choice between InP (Indium Phosphide) and silicon photonics for I/Q modulators and receivers becomes crucial. Despite being cost-effective, silicon photonics exhibits lower performance, known for its high peak voltage and limited bandwidth. In contrast, InP offers lower peak voltage and superior bandwidth but at a higher cost. In PAM4 and coherent technologies, InP transceivers are often more expensive, while silicon photonics provides a more economical alternative.

Coherent vs. PAM4 in High-Speed Transmission

Regarding power consumption, the evolution of DSP chip technology from 7nm to 5nm and even 3nm brings not only higher processing rates but also markedly better power efficiency.

Conclusion

Several companies have validated these methods through experiments. FS believes that with increased production and reduced costs, coherent methods can achieve cost competitiveness with PAM4 by requiring only a laser, modulator, and receiver, even as optical equipment becomes more complex. Coherent solutions also enable higher flexibility and performance, which distinguishes them. In conclusion, the competition between coherent transmission technology and PAM4 transmission technology continues, with future developments determining the mainstream approach.

As a leading solutions provider in the industry, FS has an abundant stock of 800G modules, ensuring your needs are met from quality to rapid delivery. Visit the FS website now for more product and solution information.

Read more about the detailed content on coherent modules: Advancements in Coherent Optical Module Technology and Standardization Trends | FS Community

Coherent Modulation vs. PAM4 in 800G Optical Transmission | FS Community

Unlocking the Potential of 800G Transceivers: Types and Applications

With the ever-increasing need for swift data transmission, the 800G transceiver has garnered considerable interest for its attributes such as high bandwidth, rapid transmission rates, outstanding performance, compact design, and future-proof compatibility. In this article, we aim to provide an overview of the diverse range of 800G optical modules and delve into their applications to assist you in making an informed decision when selecting 800G transceivers.

Exploring the Range of 800G Transceivers

Based on the single-channel rate, 800G transceivers can be categorised into 100G and 200G variants. The diagram below illustrates the corresponding architectures. Single-channel 100G optical modules can be deployed more readily, whereas 200G optical modules demand more sophisticated optical devices and necessitate a gearbox for conversion. This section primarily focuses on single-channel 100G modules.

Single-Mode 800G Transceivers:

The 800G single-mode optical transceiver is suitable for long-distance optical fibre transmission and can cover a wider network range.

800G DR8, 800G PSM8 & 800G 2xDR4:

These three standards share similar internal architectures, featuring 8 Tx and 8 Rx lanes, with a single-channel rate of 100 Gbps, and requiring 16 optical fibres.

The 800G DR8 optical module utilises 100G PAM4 and 8-channel single-mode parallel technology, enabling transmission distances of up to 500m through single-mode optical fibre. Primarily deployed in data centres, it serves 800G-800G, 800G-400G, and 800G-100G interconnections.

The 800G PSM8 uses parallel single-mode (PSM) technology with 8 optical channels, each capable of delivering 100Gbps. It supports a transmission distance of 100m and allows efficient sharing of fibre resources.

On the other hand, the 800G 2xDR4 configuration denotes 2x “400G-DR4” interfaces. It features 2x MPO-12 connectors, allowing for the creation of 2 physically distinct 400G-DR4 links from each 800G transceiver without the need for optical breakout cables. As illustrated in the figure below, it can be connected to 400G DR4 transceivers and supports a transmission distance of 500m, facilitating smooth data centre upgrades.

800G 2FR4/2LR4/FR4/FR8:

FR and LR denote 2 km and 10 km reach classes, respectively (LR stands for “long reach”).

800G 2xFR4 and 800G 2xLR4 share similar internal structures. They operate with 4 wavelengths at a single-channel rate of 100 Gbps. Using Mux, they reduce the required optical fibres to 4, as depicted in the figure below. 800G 2xFR4 can transmit up to 2km, while 800G 2xLR4 supports distances of up to 10km. Both standards use dual CS or dual duplex LC interfaces for optical connectivity. They are suitable for various applications including 800G Ethernet, breakout 2x 400G FR4/LR4, data centres, and cloud networks.

800G FR4 follows a scheme that utilises four wavelengths and PAM4 technology, operating at a single-channel rate of 200 Gbps and requiring two optical fibres, as shown in the figure below. It supports a transmission distance of 2km and is generally used in data centre interconnection, high-performance computing, storage networks, etc.

Lastly, the 800G FR8 utilises eight wavelengths, with each operating at 100 Gbps, as illustrated in the figure below. It necessitates two optical fibres and can transmit up to 2km. Additionally, the 800G FR8 offers increased transmission capacity. Typical applications include wide-area networking, data centre interconnection, and more.

Multimode 800G Transceivers

In multimode applications with transmission distances under 100 meters, there are primarily two standards for 800G optical transceivers.

800G SR8

The 800G SR8 optical transceiver utilises VCSEL technology, offering advantages such as low power consumption, cost-effectiveness, and high reliability. With a wavelength of 850nm and a single-channel speed of 100Gbps PAM4, it requires 16 optical fibres, representing an enhanced version of the 400G SR4 with double the channels. Capable of achieving high-speed 800G data interconnection within 100m, it enhances data transmission efficiency in data centres. It employs either an MPO16 or Dual MPO-12 optical interface, as shown in the diagram. Typically used in various scenarios such as data centres, communication networks, and supercomputing, the 800G SR8 optical module is versatile and efficient.

800G SR4.2

800G SR4.2 optical transceiver employs two wavelengths, 850nm and 910nm, enabling bidirectional transmission over a single fibre, commonly known as bi-directional transmission. The module incorporates a DeMux component to separate the two wavelengths. With a single-channel rate of 100 Gbps PAM4, it requires 8 optical fibres, half the amount needed for SR8. The 800G SR4.2 makes use of a 4+4 fibre setup within an MPO-12 connector interface, offering a seamless transition from 400G to 800G without the need for alterations to the fibre infrastructure.

Unleashing Potential: Applications of 800G Transceiver

In the realm of high-performance networking, the evolution of 800G transceivers has ushered in a new era of possibilities. The high-speed, efficient, and reliable data transmission capabilities of 800G transceivers have led to their widespread adoption across multiple scenarios.

Data Center Connectivity

Data Center Interconnectivity is one of the primary domains where the prowess of 800G optical modules shines. With InfiniBand, these modules facilitate seamless communication between data centers, powering the backbone of modern interconnected infrastructures. The substantial increase in data processing capability and data transmission efficiency in data centres has been essential to meet the evolving demands of cloud computing and big data processing.

High-Performance Computing

In the arena of High-Performance Computing, where processing demands are ceaselessly escalating, the efficiency of 800G transceivers becomes a game-changer. The modules ensure rapid data transfer, reducing latency and optimizing overall system performance.

5G and Communication Networks

The surge of 5G and Communication Networks demands not only speed but also reliability. Enter the 800G OSFP and QSFP-DD transceivers, engineered to meet the demands of next-gen communication networks. Their advanced capabilities bolster the 5G architecture, ensuring a robust and responsive network infrastructure. The development has also fostered advancements in various fields such as the Internet of Things (IoT), Industrial Internet, and autonomous driving.

In the Metropolitan Area Network (MAN) Domain

The metropolitan area network (MAN) serves as a bridge between local area networks (LANs) and wide area networks (WANs) across different locations, enabling high-speed data transmission between these locations through fibre optic networks. The high transmission rate of 800G optical modules can provide higher bandwidth and more stable connections, reducing data transmission delays between MANs. This improves data transfer rates and network responsiveness, fostering urban informatization and economic development.

Conclusion

800G optical transceivers, integral to the forthcoming high-speed optical communication era, come in diverse types catering to various application requirements. A comprehensive grasp of these types and their respective application domains, along with addressing common queries about 800G transceivers, will facilitate the advancement of data transmission technology. The mastery of this cutting-edge technology enables us to adeptly navigate the challenges and prospects presented by the digital era.

How FS can Help

FS offers a range of 800G transceivers to meet Ethernet and InfiniBand network connectivity needs. Additionally, FS’s overseas warehouses enable swift deliveries. Visit the FS website now for more product and solution information, and benefit from comprehensive service support.

Exploring FS 800G Transceivers: Your FAQs Answered

With the rapid development of technologies such as cloud computing, the Internet of Things (IoT) and big data, there’s a growing need for network bandwidth and faster transmission speeds. The introduction of the 800G module addresses this demand for high-speed data transmission. FS 800G transceivers incorporate advanced modulation and demodulation techniques alongside high-density optoelectronic devices, enabling them to achieve higher transmission rates in a compact form factor. Here are some FAQs about FS 800G optical transceivers.

What form-factors are used for 800G transceivers?

800G transceivers share the same form factors as 400G optics, namely OSFP and QSFP-DD. FS supports both form factors.

OSFP:

The OSFP, or “Octal Small Form-factor Pluggable,” derives its name from its 8 electrical lanes, each modulated at 100Gb/s for a total bandwidth of 800Gb/s in 800G configurations.

QSFP-DD:

The QSFP-DD, or “Quad Small Form-factor Pluggable – Double Density,” retains the QSFP form factor but adds an extra row of electrical contacts for more high-speed electrical lanes. With 8 lanes operating at 100Gb/s each, the QSFP-DD delivers a total bandwidth of 800Gb/s.

QSFP-DD and OSFP are distinct optical module packaging types. QSFP-DD, being smaller, is ideal for high-density port configurations, while OSFP consumes slightly more power than QSFP-DD. Additionally, QSFP-DD is fully backward compatible with QSFP56, QSFP28, and QSFP+, whereas OSFP is not.

For more details on the differences between 800G OSFP and QSFP-DD packaging, please refer to: 800G Transceiver Overview: QSFP-DD and OSFP Packages

Can OSFPs be plugged into a QSFP-DD port, or QSFP-DD’s plugged into an OSFP port?

No. The OSFP and the QSFP-DD are two physically distinct form factors. OSFP systems require the use of OSFP optics and cables, while QSFP-DD systems necessitate QSFP-DD optics and cables.

How many electrical lanes are used by 800G transceivers?

The 800G transceivers utilise 8x electrical lanes in each direction, with 8 transmit lanes and 8 receive lanes.

What are the speed and modulation formats used by 800G OSFP/QSFP-DD modules?

As mentioned earlier, all 800G modules utilise 8x electrical lanes bidirectionally, with 8 transmit lanes and 8 receive lanes. Each lane operates at a data rate of 100G PAM4, yielding a total module bandwidth of 800Gb/s. Furthermore, the optical output of all 800G transceivers consists of 8 optical waves, each wave modulated at 100G PAM4 per lane.

What is the significance of PAM4 or NRZ modulation for electrical or optical channels?

NRZ, which stands for “Non Return to Zero,” refers to a modulation scheme used in electrical or optical data channels. It involves two permissible amplitude levels or symbols, with one level representing a digital ‘1’ and the other representing a digital ‘0’. NRZ is commonly employed for data transmission up to 25Gb/s and is the simplest method for transmitting digital data. An example of an NRZ waveform, along with an eye diagram illustrating NRZ data, is depicted below. An eye diagram provides a visual representation of a modulation scheme, with each symbol overlapping one another.

PAM4, on the other hand, stands for Pulse Amplitude Modulation – 4, with the ‘4’ signifying the number of distinct amplitude levels or symbols in the electrical or optical signal carrying digital data. In this case, each amplitude level or symbol represents two bits of digital data. Consequently, a PAM4 waveform can transmit twice as many bits as an NRZ waveform at the same symbol or “Baud” rate. The diagram below showcases a PAM4 waveform along with an eye diagram for PAM4 data.
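
To illustrate the two-bits-per-symbol point, the sketch below maps the same bit stream onto NRZ and PAM4 symbols. The amplitude levels are nominal and the mapping is a simplified gray code; real links add equalisation and FEC on top of this.

```python
# Minimal sketch: the same bits as NRZ symbols (1 bit/symbol) vs PAM4 symbols (2 bits/symbol).
bits = [1, 0, 1, 1, 0, 0, 1, 0]

# NRZ: one bit per symbol, two amplitude levels.
nrz_symbols = [1.0 if b else -1.0 for b in bits]

# PAM4: two bits per symbol, four amplitude levels (gray-coded mapping).
PAM4_LEVELS = {(0, 0): -1.0, (0, 1): -1/3, (1, 1): 1/3, (1, 0): 1.0}
pam4_symbols = [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(len(nrz_symbols))   # 8 symbols for 8 bits
print(len(pam4_symbols))  # 4 symbols for the same 8 bits -> twice the bits per symbol
```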

For more information on the comparison between NRZ and PAM4, please refer to: NRZ vs. PAM4 Modulation Techniques

What is the maximum power consumption of 800G OSFP and QSFP-DD transceivers?

The power consumption of 800G transceivers varies between 13W and 18W per port. To obtain specific power consumption values for individual modules, please consult each transceiver’s datasheet.

Do FS 800G transceivers support backward compatibility?

The backward compatibility of 800G transceivers depends on the specific design and implementation. Some 800G transceivers are designed to be backwards compatible with 400G or 200G transceivers, allowing for a smooth transition and interoperability within existing networks. For example, the FS 800G OSFP SR8 transceiver supports 800G ethernet and breakout 2x 400G SR4 applications. However, it is important to check with the module manufacturer for specific compatibility details.

What standards govern 800G transceivers?

Standards for 800G transceivers, such as form factor specifications, electrical interfaces, and signalling protocols, are typically governed by industry consortiums like the IEEE (Institute of Electrical and Electronics Engineers), the OIF (Optical Internetworking Forum), and the QSFP-DD MSA (Quad Small Form Factor Pluggable – Double Density Multi-Source Agreement).

What 800G Transceivers are available from FS?

FS supports 800G optical transceivers in both OSFP and QSFP-DD form factors. The key features of an FS 800G optical module typically include support for multiple modulation formats, high data transfer rates, low power consumption, advanced error correction mechanisms, compact form factors (e.g., QSFP-DD or OSFP), and interoperability with existing network infrastructure. The table below summarises the supported 800G transceiver connectivity options.

| QSFP-DD Part No. | Product Description | OSFP Part No. | Product Description |
| --- | --- | --- | --- |
| QDD-SR8-800G | Generic Compatible QSFP-DD 800GBASE-SR8 PAM4 850nm 50m DOM MPO-16/APC MMF Optical Transceiver Module | OSFP-SR8-800G | NVIDIA InfiniBand MMA4Z00-NS Compatible OSFP 800G SR8 PAM4 2 x SR4 850nm 50m DOM Dual MPO-12/APC NDR MMF Optical Transceiver Module, Finned Top |
| QDD-DR8-800G | Generic Compatible QSFP-DD 800GBASE-DR8 PAM4 1310nm 500m DOM MPO-16/APC SMF Optical Transceiver Module, Support 2 x 400G-DR4 and 8 x 100G-DR | OSFP-DR8-800G | NVIDIA InfiniBand MMS4X00-NM Compatible OSFP 800G DR8 PAM4 2 x DR4 1310nm 500m DOM Dual MPO-12/APC NDR SMF Optical Transceiver Module, Finned Top |
| QDD800-PLR8-B1 | Generic Compatible QSFP-DD 800GBASE-PLR8 PAM4 1310nm 10km DOM MPO-16/APC SMF Optical Transceiver Module, Support 2 x 400G-PLR4 and 8 x 100G-LR | OSFP-2FR4-800G | NVIDIA InfiniBand MMS4X50-NM Compatible OSFP 800G 2FR4 PAM4 1310nm 2km DOM Dual Duplex LC/UPC NDR SMF Optical Transceiver Module, Finned Top |

What are the advantages of upgrading to 800G technology?

Moving to 800G technology offers several benefits for network infrastructure and data-intensive applications:

  1. Increased Bandwidth: 800G technology offers a significant increase in bandwidth, enabling faster and more efficient data transmission, meeting the growing demand for high-speed data transfer across various industries.
  2. Higher Data Rates: With 800G technology, data rates of up to 800Gbps can be achieved, enabling faster data processing, reduced latency, and improved overall network performance.
  3. Future-Proofing: Adopting 800G technology allows organizations to future-proof their network infrastructure, ensuring compatibility with upcoming technologies and applications.

Conclusion

The advent of 800G technology represents a pivotal advancement in addressing the escalating demands for network bandwidth and faster transmission speeds in our rapidly evolving digital landscape. FS 800G transceivers, with their seamless compatibility with existing network infrastructure, offer a compelling solution for organisations seeking to enhance their data transmission capabilities.

Upgrade to FS 800G optical transceivers today to experience unparalleled performance, and increased bandwidth for the challenges and opportunities of tomorrow.

Unveiling 800G Transceivers: QSFP-DD vs. OSFP Packages

While the current surge in demand is for 400G optical modules, the 800G optical network is gearing up for high-speed, high-density ports and low-latency DCI. An 800G transceiver can handle 800 billion bits per second, double the capacity of the previous 400G generation. This article delves into the key 800G module packages: QSFP-DD and OSFP.

What Is the Development Trend of 800G Transceiver Packaging?

The optical module is a crucial optoelectronic device facilitating photoelectric conversion in optical communication, essential to the industry. From GBIC to smaller SFP and now 800G QSFP-DD and OSFP, fibre transceiver form factors have evolved. The 800G transceiver’s progress focuses on speed, miniaturisation, and hot-swappable capability. Its applications span Ethernet, CWDM/DWDM, connectors, Fibre Channels, wired and wireless access, covering both data communication and telecom markets.

800G Transceiver Form Factors Advantages

800G QSFP-DD Form Factor:

The QSFP-DD (Quad Small Form-factor Pluggable Double Density) is a high-speed pluggable transceiver form factor currently favoured for 800G optical applications, helping data centres scale flexibly. It employs an 8-lane electrical interface: each lane supports 25Gb/s (NRZ modulation) or 50Gb/s (PAM4 modulation), yielding aggregate rates of 200Gb/s or 400Gb/s, and the QSFP-DD800 generation extends this to 100Gb/s (PAM4) per lane for a total of 800Gb/s.
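The relationship between lane count, modulation, and aggregate rate can be sketched in a few lines. The figures below simply restate the per-lane rates mentioned above; the ~53 GBd symbol rate for 100G-per-lane signalling is an approximation.

```python
# Aggregate module rate = number of electrical lanes x per-lane rate.
# PAM4 encodes 2 bits per symbol, so it doubles NRZ at the same baud rate.

LANES = 8  # QSFP-DD / QSFP-DD800 electrical interface width

per_lane_gbps = {
    "NRZ, 25 GBd":   25,   # 1 bit per symbol
    "PAM4, 25 GBd":  50,   # 2 bits per symbol
    "PAM4, ~53 GBd": 100,  # 100G-per-lane electrical signalling
}

for scheme, lane_rate in per_lane_gbps.items():
    print(f"{scheme}: {LANES} x {lane_rate}G = {LANES * lane_rate}G aggregate")
# Prints 200G, 400G and 800G aggregates respectively.
```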

Advantages of the 800G QSFP-DD:

  1. Backward compatibility with QSFP+/QSFP28/QSFP56 packages.
  2. Utilises a 2×1 stacked integrated cage and connector, supporting single-height and double-height cage connector systems.
  3. Features SMT connectors and 1xN cages, supporting a thermal capacity of at least 12 watts per module and reducing heat dissipation costs.
  4. Designed with flexibility in mind by the MSA working group, adopting ASIC design, supporting various interface rates, and maintaining backward compatibility (QSFP+/QSFP28), reducing port and deployment costs.

800G OSFP Form Factor:

The OSFP represents a new generation of optical modules, smaller than CFP8 yet slightly larger than QSFP-DD. It features an eight-lane high-speed electrical interface, allows up to 32 ports on a 1U front panel, and includes an integrated heat sink for superior heat dissipation.

Advantages of the 800G OSFP:

  1. OSFP is designed with an 8-channel (Octal or 8-lane) configuration, supporting a total throughput of up to 800G, enabling greater bandwidth density.
  2. Its support for more channels and higher data transfer rates translates to enhanced performance and longer transmission distances.
  3. The OSFP module boasts excellent thermal design, capable of handling higher power consumption effectively.
  4. With a larger form factor, OSFP is poised to support higher rates in the future, potentially reaching 1.6T or higher due to its increased power handling capacity.

800G Transceiver Form Factors Parameter Comparison:

Parameter | QSFP-DD | OSFP
Size (length × width × height) | 89.4mm × 18.35mm × 8.5mm | 107.8mm × 22.58mm × 13.0mm
Electrical Lanes | 8 | 8
Single Lane Rate | 25Gbps / 50Gbps / 100Gbps | 25Gbps / 50Gbps / 100Gbps
Total Max Data Rate | 200G / 400G / 800G | 200G / 400G / 800G
Modulation | NRZ / PAM4 | NRZ / PAM4
Backward Compatibility with QSFP+/QSFP28 | Yes | No
Port Density in 1U | 36 | 36
Bandwidth in 1U | 14.4Tb/s | 14.4Tb/s
Power Consumption Upper Threshold | 12W | 15W
Products | Transceiver Modules; DAC & AOC cables | Transceiver Modules; DAC & AOC cables

Fibre producers favour both OSFP and QSFP-DD. QSFP-DD, with its backward compatibility, is typically preferred in data centre environments, while OSFP’s greater thermal headroom makes it well suited to telecommunications and higher-power applications.

How to Choose 800G Transceiver for Your Data Center?

To select the appropriate 800G transceiver for your network application, thorough evaluation of factors like transmission distance, fibre type, and form factor is crucial.

The 800G QSFP-DD module utilises a Broadcom 7nm DSP chip and COB packaging, with an MTP/MPO-16 connector. Different models of the 800G QSFP-DD module vary in power consumption and transmission distance. It is suitable for high-speed network environments such as data centres, cloud computing, and large-scale networks, meeting the demand for high-bandwidth, large-capacity data transmission.

FS P/N | Power Consumption | Distance | SMF/MMF
QDD-SR8-800G | ≤13W | 50m | MMF
QDD800-PLR8-B1 | ≤18W | 10km | SMF
QDD800-XDR8-B1 | ≤18W | 2km | SMF
QDD-DR8-800G | ≤18W | 500m | SMF

The 800G OSFP module also features a Broadcom 7nm DSP chip and COB packaging. It comes in two types, Ethernet and InfiniBand, with variations in power consumption and connectors between models. It is suitable for networks such as data centres, cloud computing, and ultra-large-scale networks.

FS P/N | Power Consumption | Connector | Distance | SMF/MMF
OSFP800-2LR4-A2 | ≤18W | Dual LC Duplex | 10km | SMF
OSFP800-PLR8-B1 | ≤16.5W | MTP/MPO-16 | 10km | SMF
OSFP800-PLR8-B2 | ≤16.5W | Dual MTP/MPO-12 | 10km | SMF
OSFP-2FR4-800G | ≤18W | Dual LC Duplex | 2km | SMF
OSFP800-XDR8-B1 | ≤16.5W | MTP/MPO-16 | 2km | SMF
OSFP800-XDR8-B2 | ≤16.5W | Dual MTP/MPO-12 | 2km | SMF
OSFP800-DR8-B1 | ≤16.5W | MTP/MPO-16 | 500m | SMF
OSFP-DR8-800G | ≤16W | Dual MTP/MPO-12 | 500m | SMF
OSFP-SR8-800G | ≤15W | Dual MTP/MPO-12 | 50m | MMF

Conclusion

As technology continues to progress and innovate, we anticipate 800G optical modules will increasingly contribute to practical applications and drive advancements in the digital communication sector.

FS offers a range of 800G optical modules to meet your network construction needs. Visit the FS website for information and enjoy free technical support.

Managed vs Unmanaged vs Smart Switch: Understanding the Distinctions

Switches form the backbone of LANs, efficiently connecting devices within a specific LAN and ensuring effective data transmission among them. There are three main types of switches: managed switches, smart managed switches, and unmanaged switches. Choosing the right switch during network infrastructure upgrades can be challenging. In this article, we delve into the differences between these three types of switches to help determine which one can meet your actual network requirements.

What are Managed Switches, Unmanaged Switches and Smart Switches?

Managed switches typically support the SNMP protocol, allowing users to monitor the switch and its port statuses and read statistics such as throughput and port utilisation. These switches are designed and configured for high workloads, high traffic, and custom deployments. In large data centres and enterprise networks, managed switches are often used at the core layer of the network.
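As a rough illustration of this kind of monitoring, the sketch below walks a switch’s 64-bit inbound byte counters over SNMP. It assumes the classic synchronous pysnmp API, SNMPv2c, and a read community of "public"; the management address is a placeholder and should be replaced with your own.

```python
# Walk IF-MIB::ifHCInOctets (64-bit inbound byte counters) on a managed switch.
# Assumes the classic synchronous pysnmp "hlapi" interface and SNMPv2c.

from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, nextCmd,
)

SWITCH_IP = "192.0.2.10"                      # placeholder management address
IF_HC_IN_OCTETS = "1.3.6.1.2.1.31.1.1.1.6"    # IF-MIB::ifHCInOctets

for error_indication, error_status, _idx, var_binds in nextCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),        # SNMPv2c read community
    UdpTransportTarget((SWITCH_IP, 161)),
    ContextData(),
    ObjectType(ObjectIdentity(IF_HC_IN_OCTETS)),
    lexicographicMode=False,                   # stay inside this counter column
):
    if error_indication or error_status:
        print(error_indication or error_status.prettyPrint())
        break
    for oid, value in var_binds:
        print(f"{oid.prettyPrint()} = {value.prettyPrint()} octets")
```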

Unmanaged switches, also known as dumb switches, are plug-and-play devices with no remote configuration, management, or monitoring options. You cannot log in to an unmanaged switch or read any port utilisation or throughput figures. However, unmanaged switches are easy to set up and are used in small networks, or for adding temporary device groups to larger networks, to expand Ethernet port counts and connect network hotspots or edge devices to small independent networks.

Smart managed switches are managed through a web browser, allowing users to maintain their network through intuitive guidance. These smart Ethernet switches are particularly suitable for enterprises needing remote secure management and troubleshooting, enabling network administrators to monitor and control traffic for optimal network performance and reliability. Web smart managed switches have become a viable solution for small and medium-sized enterprises, with the advantage of being able to change the switch configuration to meet specific network requirements.

What is the Difference Between Them?

Next, we will elaborate on the differences between these three types of switches from the following three aspects to help you lay the groundwork for purchasing.

Configuration and Network Performance

Managed switches allow administrators to configure, monitor, and manage them through interfaces such as Command Line Interface (CLI), web interface, or SNMP. They support advanced features like VLAN segmentation, network monitoring, traffic control, protocol support, etc. Additionally, their advanced features enable users to recover data in case of device or network failures. On the other hand, unmanaged switches come with pre-installed configurations that prevent you from making changes to the network and do not support any form of configuration or management. Smart managed switches, positioned between managed and unmanaged switches, offer partial management features such as VLANs, QoS, etc., but their configuration and management options are not as extensive as fully managed switches and are typically done through a web interface.
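To make the contrast concrete, the following sketch shows one common way administrators script VLAN changes on a managed switch: pushing CLI commands over SSH with the Netmiko library. The device type, credentials, and IOS-style commands are illustrative assumptions and should be adapted to your platform; unmanaged switches offer no equivalent interface.

```python
# Push a VLAN and an access-port assignment to a managed switch over SSH.
# Device details and the IOS-style commands are illustrative assumptions.

from netmiko import ConnectHandler

switch = {
    "device_type": "cisco_ios",    # adjust to your platform's Netmiko driver
    "host": "192.0.2.10",          # placeholder management address
    "username": "admin",
    "password": "********",
}

vlan_commands = [
    "vlan 100",
    "name Servers",
    "interface GigabitEthernet1/0/10",
    "switchport mode access",
    "switchport access vlan 100",
]

with ConnectHandler(**switch) as conn:
    output = conn.send_config_set(vlan_commands)  # enters and exits config mode
    conn.save_config()                            # persist to startup config
    print(output)
```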

Security Features

The advanced features of managed switches help identify and swiftly eliminate active threats while protecting and controlling data. Unmanaged switches do not provide any security features. In contrast, smart managed switches, while also offering some security features, usually do not match the comprehensiveness or sophistication of managed switches.

Cost

Due to the lack of management features, unmanaged switches are the least expensive. Managed switches typically have the highest prices due to the advanced features and management capabilities they provide. Smart managed switches, however, tend to be lower in cost compared to fully managed switches.

Switch Type | Features | Performance | Security | Cost | Application
Managed Switch | Comprehensive functions | Monitoring and control of the whole network | High level of network security | Expensive | Data centres, large enterprise networks
Smart Managed Switch | Limited but intelligent functions | Intuitive management via a web browser | Better network security | Moderate | SMBs, home offices
Unmanaged Switch | Fixed configuration | Plug and play with limited configuration | No security capabilities | Most affordable | Homes, conference rooms

How to Select the Appropriate Switch?

After understanding the main differences between managed, unmanaged, and smart managed switches, you should choose the appropriate switch type based on your actual needs. Here are the applications of these three types of switches, which you can consider when making a purchase:

  • Managed switches are suitable for environments that require highly customised and precise network management, such as large enterprise networks, data centres, or scenarios requiring complex network policies and security controls.
  • Smart managed switches are suitable for small and medium-sized enterprises or departmental networks that require a certain level of network management and flexible configuration but may not have the resources or need to maintain the complex settings of a fully managed switch.
  • Unmanaged switches are ideal for home use, small offices, or any simple network environment that does not require complex configuration and management. Unmanaged switches are the ideal choice when the budget is limited, and network requirements are straightforward.

In brief, the choice of switch type depends on your network requirements, budget, and how much time you are willing to invest in network management. If you need high control and customisation capabilities, a managed switch is the best choice. If you are looking for cost-effectiveness and a certain level of control, a smart managed switch may be more suitable. For the most basic network needs, an unmanaged switch provides a simpler and more economical solution.

Conclusion

Ultimately, selecting the appropriate switch type is essential to achieve optimal network performance and efficiency. It is important to consider your network requirements, budget, and management preferences when making this decision for your network infrastructure.

As a leading global provider of networking products and solutions, FS not only offers many types of switches, but also customised solutions for your business network. For more product or technology-related knowledge, you can visit FS Community.

Discovering Powerful FS Enterprise Switches for Your Network

Enterprise switches are specifically designed for networks with multiple switches and connections, often referred to as campus LAN switches. These switches are tailored to meet the needs of enterprise networks, which typically follow a three-tier hierarchical design comprising core, aggregation, and access layers. Each layer serves distinct functions within the network architecture. In this guide, we’ll delve into the intricacies of enterprise switches and discuss important factors to consider when buying them.

Data Centre, Enterprise, and Home Network Switches: Key Differences

Switch vendors provide network switches designed for different network environments. The following comparison will help you understand more about enterprise switches:

Data Centre Switches

These switches have high port density and bandwidth requirements, handling both north-south traffic (traffic between data centre external users and servers or between data centre servers and the Internet) and east-west traffic (traffic between servers within the data centre).

Enterprise Switches

Enterprise switches need to track and monitor users and endpoint devices to protect each connection point from security issues. Some offer special features to meet the needs of specific network environments, such as PoE capabilities. With PoE technology, enterprise network switches can manage the power consumption of the many endpoint devices connected to the switch.
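As a simple illustration of PoE budgeting, the sketch below checks an assumed mix of powered devices against a hypothetical 370 W switch budget, using the PSE-side maximums defined by the IEEE 802.3af/at/bt standards. The device counts and the budget figure are illustrative assumptions, not FS product specifications.

```python
# Check a worst-case endpoint draw against a switch's PoE power budget.
# The 370 W budget and device mix are assumptions; per-standard values are
# the maximum PSE (switch-side) power of IEEE 802.3af/at/bt.

PSE_MAX_W = {"802.3af": 15.4, "802.3at": 30.0, "802.3bt (Type 3)": 60.0}

endpoints = [
    ("IP phone",   "802.3af",          8),
    ("Wi-Fi AP",   "802.3at",          6),
    ("PTZ camera", "802.3bt (Type 3)", 2),
]

SWITCH_POE_BUDGET_W = 370.0  # hypothetical total PoE budget

worst_case_w = sum(PSE_MAX_W[std] * count for _, std, count in endpoints)
print(f"Worst-case draw: {worst_case_w:.1f} W of {SWITCH_POE_BUDGET_W:.0f} W")
print("Within budget" if worst_case_w <= SWITCH_POE_BUDGET_W else "Over budget")
```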

Home Network Switches

Home network traffic is not high, meaning the requirements for switches are much lower. In most cases, switches only need to extend network connections and transfer data from one device to another without handling data congestion. Unmanaged plug-and-play switches are often used as the perfect solution for home networks because they are easy to manage, require no configuration, and are more cost-effective than managed switches.

For SOHO offices with fewer than 10 users, a single 16-port Ethernet switch is usually sufficient. However, for tech-savvy users who like to build fast, secure home networks, managed switches are often the preferred choice.

Selecting the Ideal Switch: Data Centre vs. Enterprise Networks

For large enterprise networks, redundancy at the uplink layers, such as the aggregation and core layers, should be much higher than at the access layer. This means that high availability should be the primary consideration when designing the network. To cope with high traffic volumes and minimise the risk of failures, it’s advisable to deploy two or more aggregation or core layer switches at each level, so that the failure of one switch does not affect the others.
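A back-of-the-envelope calculation shows why this redundancy matters. Assuming independent failures and an illustrative 99.9% availability per switch, adding a second parallel switch cuts expected downtime dramatically; the figures below are a simplification, not a formal availability model.

```python
# Availability of N parallel core/aggregation switches, assuming independent
# failures. The 99.9% per-switch figure is purely illustrative.

def parallel_availability(single: float, n: int) -> float:
    """Probability that at least one of n identical switches is up."""
    return 1 - (1 - single) ** n

SINGLE_SWITCH = 0.999  # roughly 8.8 hours of downtime per year on its own

for n in (1, 2, 3):
    a = parallel_availability(SINGLE_SWITCH, n)
    downtime_min = (1 - a) * 365 * 24 * 60
    print(f"{n} switch(es): availability {a:.9f}, ~{downtime_min:.2f} min/yr down")
```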

In complex networks with a large number of servers to manage, network virtualisation is needed to optimise network speed and reliability. Data centre switches offer richer functionality than traditional LAN enterprise switches, making them crucial for deploying high-density virtual machine environments and handling the growing east-west traffic associated with virtualisation.

Key Considerations Before Selecting Enterprise Switches

Ethernet switches play a crucial role in enterprise networks, regardless of whether it’s a small or large-scale network. Before you decide to buy enterprise switches, there are several criteria you should consider:

Network Planning

Identify your specific needs, including network scale, purpose, devices to be connected, and future network plans. For small businesses with fewer than 200 users and no expansion plans, a two-tier architecture might suffice. Medium to large enterprises typically require a three-tier hierarchical network model, comprising access, distribution, and core layer switches.
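When sizing the access and distribution layers, a quick oversubscription check is a useful planning aid. The port counts below are illustrative assumptions, not a recommendation for any particular FS model.

```python
# Access-layer oversubscription = total downlink capacity / total uplink capacity.
# Port counts are illustrative assumptions for a typical access switch.

downlink_gbps = 48 * 1    # 48 x 1G user-facing ports
uplink_gbps = 4 * 10      # 4 x 10G uplinks to the distribution layer

ratio = downlink_gbps / uplink_gbps
print(f"Oversubscription ratio: {ratio:.1f}:1")  # 1.2:1 in this example
# Ratios of roughly 3:1 or better at the access layer are a common rule of
# thumb; treat it as a guideline rather than a hard requirement.
```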

Evaluate Enterprise Switches

Once you’ve established your network architecture, gather more detailed information to make an informed decision.

  • Port Speeds and Wiring Connections: Modern enterprise switches support various port speeds such as 1G Ethernet, 10GE, 40GE, and 100GE. Consider whether you need RJ45 ports for copper connections or SFP/SFP+ ports for fibre connections based on your wiring infrastructure.
  • Installation Environment: Factor in the switch’s dimensions, operating temperature, and humidity based on the installation environment. Ensure adequate rack space and consider switches that can operate in extreme conditions if needed.
  • Advanced Features: Look for advanced features like built-in troubleshooting tools, converged wired or wireless capabilities, and other specific functionalities to meet your network requirements.

Other Considerations

PoE (Power over Ethernet) switches simplify wiring for devices like security cameras and IP phones. Stackable switches offer scalability for future expansion, enhancing network availability. By considering these factors, you can make a well-informed decision when selecting enterprise switches for your network infrastructure.

How to Choose Your Enterprise Switch Supplier

Creating a functional network is often more complex than anticipated. With numerous suppliers offering switches with similar specifications, how do you make the right choice? Here are some tips for selecting a switch supplier:

  • Once you have an idea of your ideal switch ports and speeds, opt for a supplier with a diverse range of switch types and models. This makes it easier to purchase the right enterprise switches in one go and avoids compatibility and interoperability issues.
  • Understanding hardware support services, costs, and the software offered by switch suppliers can save you from unnecessary complications. Warranty is a crucial factor when choosing a switch brand. Online and offline technical assistance and troubleshooting support are also important considerations.

If you’ve reviewed the above criteria but are still unsure about the feasibility of your plan, seek help from network technicians. Most switch suppliers offer technical support and can recommend products based on your specific needs.

Conclusion

In summary, enterprise switches are essential components of contemporary network infrastructure, meeting the requirements of varied network environments. When choosing, it’s essential to factor in elements such as network planning, port speeds, installation environment, advanced features, and supplier support services. By carefully assessing these criteria and seeking guidance as necessary, you can ensure optimal performance and reliability for your network infrastructure.

How FS Can Help

FS offers a variety of models of enterprise switches and provides high-performance, highly reliable, and premium service network solutions, helping your enterprise network achieve efficient operations. Furthermore, FS not only offers a 5-year warranty for most switches but also provides free software upgrades. Additionally, our 24/7 customer service and free technical support are available in all time zones.