Accelerating Data Centers: FS Unveils Next-Gen 400G Solutions

As large-scale data centers transition to faster, more scalable infrastructures and hyperscale cloud services are adopted at a rapid pace, existing 100G networks fall short of current demands. As the next-generation mainstream port technology, 400G significantly increases network bandwidth, enhances link utilization, and helps operators, OTT providers, and other clients manage unprecedented data traffic growth effectively.

To meet the demand for higher data rates, FS has been actively developing a series of 400G products, including 400G switches, optical modules, cables, and network adapters.

FS 400G Switches

The emergence of 400G data center switches has facilitated the transition from 100G to 400G in data centers, providing flexibility for building large-scale leaf and spine designs while reducing the total number of network devices. This reduction can save costs and decrease power consumption. Whether it’s the powerful N9510-64D or the versatile N9550 series, FS 400G data center switches can deliver the performance and flexibility required for today’s data-intensive applications.

Of particular note is that, as open network switches, the N8550 and N9550 series let users freely choose their preferred operating system, enhancing flexibility. They are designed to meet customer requirements by providing comprehensive support for L3 features, SONiC and Broadcom chips, and data center functionalities. Additionally, FS offers PicOS-based open network switch operating system solutions, which provide a more flexible, programmable, and scalable network operating system (NOS) at a lower total cost of ownership (TCO).

FS 400G Transceivers

FS offers two different types of packaging for its 400G transceivers: QSFP-DD and OSFP, developed to support 400G with performance as their hallmark. Additionally, FS provides CFP2 DCO transceivers for coherent transmission at various rates (100G/200G/400G) in DWDM applications. Moreover, FS has developed InfiniBand cables and transceivers to enhance the performance of HPC networks, meeting the requirements for high bandwidth, low latency, and highly reliable connections.

FS conducts rigorous testing on its 400G optical modules using advanced analytical equipment, including TX/RX testing, temperature measurement, rate testing, and spectrometer evaluation tests, to ensure the performance and compatibility of the optical modules.

FS 400G Cables

When planning 400G Ethernet cabling or connection schemes, it’s essential to choose devices with low insertion loss and good return loss to meet the performance requirements of high-density data center links. FS offers various wiring options, including DAC/AOC cables and breakout cables. FS DAC/AOC breakout cables provide three connection types to meet high-density requirements for standard and combination connector configurations: 4x100G, 2x200G, and 8x50G. Their low insertion loss and ultra-low crosstalk effectively enhance transmission performance, while their high bend flexibility offers cost-effective solutions for short links.

FS 400G Network Adapters

FS 400G network adapters utilize the industry-leading ConnectX-7 series cards. The ConnectX-7 VPI card offers a 400Gb/s InfiniBand port with ultra-low latency and delivers 330 to 370 million messages per second, enabling top performance and flexibility to meet the growing demands of data center applications. In addition to all existing innovative features from previous versions, the ConnectX-7 card also provides numerous enhanced functionalities to further boost performance and scalability.

FS 400G Networking Solutions

To maximize the utilization of the 400G product series, FS offers comprehensive 400G network solutions, such as solutions tailored for upgrading from 100G to high-density 400G data centers. These solutions provide diverse and adaptable networking options customized for cloud data centers. They are designed to tackle the continuous increase in data center traffic and the growing need for high-bandwidth solutions in extensive 400G data center networks.

For more information about FS 400G products, please read FS 400G Product Family Introduction.

How FS Can Help

Register for an FS account now, choose from our range of 400G products and solutions tailored to your needs, and effortlessly upgrade your network.

Exploring FS 100G EDR InfiniBand Solutions: Powering HPC

In the realm of high-speed processing and complex workloads, InfiniBand is pivotal for HPC and hyperscale clouds. This article explores FS’s 100G EDR InfiniBand solution, emphasizing the deployment of QSFP28 EDR transceivers and cables to boost network performance.

What Are the InfiniBand EDR 100G Cables and Transceivers?

InfiniBand EDR 100G Active AOC Cables

The NVIDIA InfiniBand MFA1A00-E001, an active optical cable based on a Class 1 FDA laser, is designed for InfiniBand 100Gb/s EDR systems. Available in lengths from 1m to 100m, these cables offer predictable latency, consume a maximum of 3.5W, and improve airflow in high-speed HPC environments.

InfiniBand EDR 100G Passive Copper Cables

The NVIDIA InfiniBand MCP1600-E001E30 is available in lengths of 0.5m to 3m. With four high-speed copper pairs supporting up to 25Gb/s, it offers efficient short-haul connectivity. Featuring EEPROM on each QSFP28 port, it enhances host system communication, enabling higher port bandwidth, density, and configurability while reducing power demand in data centers.

InfiniBand EDR 100G Optical Modules

The 100Gb EDR optical modules, packaged in the QSFP28 form factor with LC duplex or MTP/MPO-12 connectors, are suitable for both EDR InfiniBand and 100G Ethernet. They can be categorized into QSFP28 SR4, QSFP28 PSM4, QSFP28 CWDM4, and QSFP28 LR4 based on transmission distance requirements.

100Gb InfiniBand EDR System Scenario Applications

InfiniBand has gained widespread adoption in data centers and other domains, primarily employing the spine-leaf architecture. In data centers, transceivers and cables play a pivotal role in two key scenarios: Data Center to User and Data Center Interconnects.

For more on application scenarios, please read 100G InfiniBand EDR Solution.

Conclusion

Amidst the evolving landscape of 100G InfiniBand EDR, FS’s solution emerges as mature and robust. Offering high bandwidth, low latency, and reduced power consumption, it enables higher port density and configurability at a lower cost. Tailored for large-scale data centers, HPC, and future network expansion, customers can choose products based on application needs, transmission distance, and deployment. FS 100G EDR InfiniBand solution meets the escalating demands of modern computational workloads.

Navigating Optimal GPU-Module Ratios: Decoding the Future of Network Architecture

The market’s diverse methods for calculating the optical module-to-GPU ratio lead to discrepancies due to varying network structures. The precise number of optical modules required hinges on critical factors such as network card models, switch models, and the scalable unit count.

Network Card Model

The primary models are ConnectX-6 (200Gb/s, for A100) and ConnectX-7 (400Gb/s, for H100), with the upcoming ConnectX-8 800Gb/s slated for release in 2024.

Switch Model

MQM9700 switches (64 channels of 400Gb/s) and MQM8700 switches (40 channels of 200Gb/s) are the main types, affecting optical module needs based on transmission rates.

Number of Units (Scalable Unit)

Smaller quantities use a two-tier structure, while larger quantities employ a three-tier structure, as seen in H100 and A100 SuperPODs.

  • H100 SuperPOD: Each unit consists of 32 nodes (DGX H100 servers) and supports a maximum of 4 units to form a cluster, using a two-layer switching architecture.
  • A100 SuperPOD: Each unit consists of 20 nodes (DGX A100 servers) and supports a maximum of 7 units to form a cluster. If the number of units exceeds 5, a three-layer switching architecture is required.

Optical Module Demand Under Four Network Configurations

Projected shipments of H100 and A100 GPUs in 2023 and 2024 indicate substantial optical module demands, with a significant market expansion forecasted. The following are four application scenarios:

  • A100+ConnectX6+MQM8700 Three-layer Network: Ratio 1:6, all using 200G optical modules.
  • A100+ConnectX6+MQM9700 Two-layer Network: 1:0.75 of 800G optical modules + 1:1 of 200G optical modules.
  • H100+ConnectX7+MQM9700 Two-layer Network: 1:1.5 of 800G optical modules + 1:1 of 400G optical modules.
  • H100+ConnectX8 (yet to be released)+MQM9700 Three-layer Network: Ratio 1:6, all using 800G optical modules.

For detailed calculations regarding each scenario, you can click on this article to learn more.
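
As a rough illustration of that arithmetic, the minimal Python sketch below multiplies a GPU count by the per-GPU module ratios from the four scenarios listed above. The cluster size used in the example (4 units x 32 DGX nodes x 8 GPUs) is only an illustrative H100 SuperPOD-scale figure, not a forecast of actual shipments.

```python
# Sketch of the optical-module demand arithmetic described above.
# The per-GPU ratios come from the four scenarios in this article;
# the cluster size is an illustrative assumption, not vendor data.

SCENARIOS = {
    # scenario name: {module speed: modules required per GPU}
    "A100 + ConnectX-6 + MQM8700, three-layer": {"200G": 6},
    "A100 + ConnectX-6 + MQM9700, two-layer": {"800G": 0.75, "200G": 1},
    "H100 + ConnectX-7 + MQM9700, two-layer": {"800G": 1.5, "400G": 1},
    "H100 + ConnectX-8 + MQM9700, three-layer": {"800G": 6},
}

def module_demand(gpu_count: int, ratios: dict) -> dict:
    """Return the number of optical modules of each speed for a given GPU count."""
    return {speed: round(gpu_count * per_gpu) for speed, per_gpu in ratios.items()}

if __name__ == "__main__":
    # Example: an H100 SuperPOD-scale cluster of 4 units x 32 DGX nodes x 8 GPUs.
    gpus = 4 * 32 * 8
    for name, ratios in SCENARIOS.items():
        print(f"{gpus} GPUs, {name}: {module_demand(gpus, ratios)}")
```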

Conclusion

As technology progresses, the networking industry anticipates the rise of high-speed solutions like 400G multimode optical modules. FS offers optical modules from 1G to 800G, catering to evolving network demands.

Register for an FS account, select products that suit your needs, and FS will tailor an exclusive solution for you to achieve network upgrades.

Revolutionizing Data Center Networking: From Traditional to Advanced Architectures

As businesses upgrade their data centers, they’re transitioning from traditional 2-layer network architectures to more advanced 3-layer routing frameworks. Protocols like OSPF and BGP are increasingly used to manage connectivity and maintain network reliability. However, certain applications, especially those related to virtualization, HPC, and storage, still rely on 2-layer network connectivity due to their specific requirements.

VXLAN Overlay Network Virtualization

In today’s fast-paced digital environment, applications are evolving to transcend physical hardware and networking constraints. An ideal networking solution offers scalability, seamless migration, and robust reliability within a 2-layer framework. VXLAN tunneling technology has emerged as a key enabler, constructing a virtual 2-layer network on top of the existing 3-layer infrastructure. Control plane protocols like EVPN synchronize network states and tables, fulfilling contemporary business networking requirements.

Network virtualization divides a single physical network into distinct virtual networks, optimizing resource use across data center infrastructure. VXLAN, utilizing standard overlay tunneling encapsulation, extends the control plane using the BGP protocol for better compatibility and flexibility. VXLAN provides a larger namespace for network isolation across the 3-layer network, supporting up to 16 million networks. EVPN disseminates layer 2 MAC and layer 3 IP information, enabling communication between VNIs and supporting both centralized and distributed deployment models.

For enhanced flexibility, this project utilizes a distributed gateway setup, supporting agile execution and deployment processes. Equal-Cost Multipath (ECMP) routing and other methodologies optimize resource utilization and offer protection from single node failures.
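
As a quick sanity check on the VXLAN figures mentioned above, the short sketch below derives the roughly 16-million-segment figure from the 24-bit VNI and estimates the extra underlay MTU that VXLAN encapsulation requires, using the header sizes defined in RFC 7348. The 1500-byte tenant MTU is simply an illustrative default.

```python
# Back-of-the-envelope numbers behind VXLAN overlays (RFC 7348 header sizes).

VNI_BITS = 24                                    # VXLAN Network Identifier width
INNER_ETH, VXLAN_HDR, OUTER_UDP, OUTER_IPV4 = 14, 8, 8, 20

def vni_space() -> int:
    """Distinct layer-2 segments a 24-bit VNI can address."""
    return 2 ** VNI_BITS                         # 16,777,216 -> "up to 16 million networks"

def underlay_ip_mtu(tenant_ip_mtu: int = 1500) -> int:
    """IP MTU the underlay must carry so tenant packets are never fragmented:
    the inner Ethernet frame plus VXLAN, UDP, and outer IP headers (about +50 bytes)."""
    return tenant_ip_mtu + INNER_ETH + VXLAN_HDR + OUTER_UDP + OUTER_IPV4

if __name__ == "__main__":
    print(f"VNI space: {vni_space():,} segments")
    print(f"Required underlay IP MTU for a 1500-byte tenant MTU: {underlay_ip_mtu()} bytes")
```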

RoCE over EVPN-VXLAN

RoCE technology facilitates efficient data transfer between servers, reducing CPU overhead and network latency. Integrating RoCE with EVPN-VXLAN enables high-throughput, low-latency network transmission in high-performance data center environments, enhancing scalability. Network virtualization divides physical resources into virtual networks tailored to distinct business needs, allowing for agile resource management and rapid service deployment.

Simplified network planning, deployment, and operations are essential for managing large-scale networks efficiently. Unnumbered BGP eliminates the need for complex IP address schemes, improving efficiency and reducing operational risks. Real-time fault detection tools like WJH provide deep network insights, enabling quick resolution of network challenges.

Conclusion

Essentially, recent advancements in data center networking focus on simplifying network design, deployment, and management. Deploying technological solutions such as Unnumbered BGP eliminates the need for complex IP address schemes, reducing setup errors and boosting productivity. Tools like WJH enable immediate fault detection, providing valuable network insights and enabling quick resolution of network issues. The evolution of data center infrastructures is moving towards distributed and interconnected multi-data center configurations, requiring faster network connections and improving overall service quality for users.

For detailed information on EVPN-VXLAN and RoCE, you can read: Optimizing Data Center Networks: Harnessing the Power of EVPN-VXLAN, RoCE, and Advanced Routing Strategies.

HPC and Future Networks: Architectures, Technologies, and Innovations

High-Performance Computing (HPC) has become a crucial tool for solving complex problems and pushing the boundaries of scientific research and many other applications. However, efficient operation of HPC systems requires specialized infrastructure and support. HPC has emerged as an indispensable tool across domains such as science, meteorology, finance, and healthcare, addressing complex challenges and driving innovation.

Understanding the importance of data centers in supporting HPC starts with knowing the three fundamental components that constitute high-performance computing systems: compute, storage, and networking.

Facilities in High-Performance Computing

Intensive computations in HPC environments generate substantial heat, necessitating advanced cooling solutions. Efficient cooling prevents overheating, ensuring system stability and prolonging hardware lifespan. Supporting HPC, data centers employ cutting-edge cooling facilities, including liquid cooling systems and precision air conditioning. Moreover, data center architects explore innovative cooling technologies like immersion cooling, submerging servers in special liquids for effective heat dissipation.

Success in HPC data centers relies on a range of specialized equipment tailored to meet the unique demands of high-performance computing. Key components include data center switches, server network cards, high-speed optical modules, DAC and AOC cables, and power supplies.

The Growing Demand for Network Infrastructure in High-Performance Computing

With revolutionary technologies like 5G, big data, and the Internet of Things (IoT) permeating various aspects of society, the trajectory towards an intelligent, digitized society over the next two to three decades is inevitable. Data center computing power has become a powerful driving force, shifting focus from resource scale to computational scale.

To meet the ever-growing demand for computing power, high-performance computing (HPC) has become a top priority, especially as computational cluster scales expand from the petascale to the exascale. This shift imposes increasingly higher demands on interconnect network performance, marking a clear trend of deep integration between computation and networking. HPC introduces different network performance requirements in three typical scenarios: loosely coupled computing scenarios, tightly coupled scenarios, and data-intensive computing scenarios.

In summary, high-performance computing (HPC) imposes stringent requirements on network throughput and latency. To meet these demands, the industry widely adopts Remote Direct Memory Access (RDMA) as an alternative to the TCP protocol to reduce latency and maximize CPU utilization on servers. Despite its advantages, the sensitivity of RDMA to network packet loss highlights the importance of lossless networks.

The Evolution of High-Performance Computing Networks

Traditional data center networks have historically adopted a multi-hop symmetric architecture based on Ethernet technology, relying on the TCP/IP protocol stack for transmission. After more than 30 years of development, however, Remote Direct Memory Access (RDMA) technology has gradually replaced TCP/IP as the preferred protocol for HPC networks. Additionally, the choice of RDMA network layer protocols has evolved from expensive lossless networks based on the InfiniBand (IB) protocol to intelligent lossless networks based on Ethernet.

From TCP to RDMA

In traditional data centers, Ethernet technology and the TCP/IP protocol stack have been the norm for building multi-hop symmetric network architectures. However, due to two main limitations—latency issues and CPU utilization—the TCP/IP network is no longer sufficient to meet the demands of high-performance computing. To address these challenges, RDMA functionality has been introduced at the server side. RDMA is a direct memory access technology that enables data transfer directly between computer memories without involving the operating system, thus bypassing time-consuming processor operations. This approach achieves high bandwidth, low latency, and low resource utilization.

From IB to RoCE

RDMA enables direct data read and write between applications and network cards. RDMA’s zero-copy mechanism allows the receiving end to read data directly from the sending end’s memory, significantly reducing CPU burden and improving CPU efficiency. Currently, there are three choices for RDMA network layer protocols: InfiniBand, iWARP (Internet Wide Area RDMA Protocol), and RoCE (RDMA over Converged Ethernet). Although RoCE offers many advantages, its sensitivity to packet loss requires support from lossless Ethernet. This evolution of HPC networks reflects a continuous pursuit of enhanced performance, efficiency, and interoperability.

Enterprise Innovative Solution: Designing High-Performance Data Center Networks

The architecture of data center networks has evolved from the traditional core-aggregation-access model to the modern Spine-Leaf design. This approach fully utilizes network interconnection bandwidth, reduces multi-layer convergence rates, and is easy to scale. When traffic bottlenecks occur, horizontal expansion can be achieved by increasing uplink links and reducing convergence ratios, minimizing the impact on bandwidth expansion. Overlay networks utilize EVPN-VXLAN technology to achieve flexible network deployment and resource allocation.
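
To make the convergence (oversubscription) ratio mentioned above concrete, the minimal sketch below computes it for a single leaf switch and shows how adding uplinks drives it toward 1:1. The port counts and speeds are illustrative assumptions, not the figures of any particular FS or NVIDIA switch.

```python
# Minimal sketch of the leaf-switch convergence (oversubscription) ratio.
# Port counts and speeds below are illustrative assumptions only.

def convergence_ratio(downlinks: int, downlink_gbps: float,
                      uplinks: int, uplink_gbps: float) -> float:
    """Downlink capacity divided by uplink capacity for one leaf switch."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

if __name__ == "__main__":
    # 48 x 100G server-facing ports, 8 x 400G spine-facing uplinks.
    before = convergence_ratio(48, 100, 8, 400)   # 1.5:1
    # Horizontal expansion: add uplinks to reduce the convergence ratio.
    after = convergence_ratio(48, 100, 12, 400)   # 1.0:1
    print(f"Convergence ratio before: {before:.2f}:1, after adding uplinks: {after:.2f}:1")
```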

This solution draws on the design experience of internet data center networks, adopting the Spine-Leaf architecture and EVPN-VXLAN technology to provide a versatile and scalable network infrastructure for upper-layer services. Production and office networks are isolated by domain firewalls and connected to office buildings, labs, and regional center exits. The core switches of the production network provide up to 1.6Tb/s of inter-POD communication bandwidth and 160Gb/s of high-speed network egress capacity, with each POD's internal horizontal network capacity reaching 24Tb/s, ensuring minimal packet loss. The building wiring is planned around the Spine-Leaf architecture, with each POD's switches interconnected over 100G links and deployed in TOR mode. The overall network structure is more streamlined, improving cable deployment and management efficiency.

Future-Oriented Equipment Selection

When envisioning and building data center networks, careful consideration of technological advancements, industry trends, and operational costs over the next five years is crucial. The choice of network switches plays a vital role in the overall design of data center networks. Traditional large-scale network designs often opt for chassis-based equipment to enhance the overall capacity of the network system, but scalability is limited.

Therefore, for the network equipment selection of this project, NVIDIA strongly advocates for adopting a modular switch network architecture. This strategic approach facilitates rapid familiarization by maintenance teams. Additionally, it provides operational flexibility for future network architecture adjustments, equipment reuse, and maintenance replacements.

In response to the ongoing trend of business transformation and the surge in demand for big data, most data center network designs adopt the mature Spine-Leaf architecture, coupled with EVPN-VXLAN technology to achieve efficient network virtualization. This architectural approach ensures convenient high-bandwidth, low-latency network traffic, laying the foundation for scalability and flexibility.

How FS Can Help

FS is a professional provider of communication and high-speed network system solutions for network, data center, and telecommunications customers. Leveraging NVIDIA® InfiniBand switches, 100G/200G/400G/800G InfiniBand transceivers, and NVIDIA® InfiniBand adapters, FS offers customers a comprehensive set of solutions based on InfiniBand and lossless Ethernet (RoCE). These solutions meet diverse application requirements, enabling users to accelerate their businesses and enhance performance. For more information, please visit FS.COM.

Empowering Your 800G Networks with MTP/MPO Fiber Cables

In the era of ultra-high-speed data transmission, MTP/MPO cables have become a key player, especially in the context of 800G networks, offering a harmonious blend of high-density connectivity, reliability, and scalability. This article will delve into the advantages of MTP/MPO cables in 800G networks and provide specific solutions for constructing an 800G network, offering valuable insights for upgrading your existing data center.

Challenges Faced in 800G Data Transmission

As a critical hub for storing and processing vast amounts of data, data centers require high-speed and stable networks to support data transmission and processing. The 800G network achieves a data transfer rate of 800 Gigabits per second (Gbps) and can meet the demands of large-scale data transmission and processing in data centers, enhancing overall efficiency.

Therefore, many major internet companies are either constructing new 800G data centers or upgrading existing data centers from 100G or 400G to 800G speeds. However, the pursuit of 800G data transmission faces numerous complex challenges that necessitate innovative solutions. Here, we analyze the intricate obstacles associated with achieving ultra-fast data transmission.

Insufficient Bandwidth & High Latency

The 800G network demands extensive data transmission, placing higher requirements on bandwidth. It necessitates network equipment capable of supporting greater data throughput, particularly in terms of connection cables. Ordinary fiber cables typically carry only a single fiber or duplex pair, and their optical and physical characteristics are inadequate for handling massive data volumes, failing to meet the high-bandwidth requirements of 800G.

While emphasizing high bandwidth, data center networks also require low latency to meet end-user experience standards. In high-speed networks, ordinary optical fibers undergo more refraction and scattering, resulting in additional time delays during signal transmission.

Limited Spatial Layout

The high bandwidth requirements of 800G networks typically come with more connection ports and optical fibers. However, the limited space in data centers or server rooms poses a challenge. Achieving high-density connections requires accommodating more connection devices in the constrained space, leading to crowded layouts and increased challenges in space management and design.

Complex Network Architecture

The transition to an 800G network necessitates a reassessment of network architecture. Upgrading to higher data rates requires consideration of network design, scalability, and compatibility with existing infrastructure. Therefore, the cabling system must meet both current usage requirements and align with future development trends. Given the long usage lifecycle of cabling systems, addressing how to match the cabling installation with multiple IT equipment update cycles becomes a challenging problem.

High Construction Cost

Implementing 800G data transmission involves investments in infrastructure and equipment. Achieving higher data rates requires upgrading and replacing existing network equipment and cabling management patterns, incurring significant costs. Cables, in particular, carry various network devices, and their required lifecycle is longer than that of network equipment. Frequent replacements can result in resource wastage.

Effectively addressing these challenges is crucial to unlocking the full potential of a super-fast, efficient data network.

Unlocking 800G Power: MTP/MPO Cables’ Key Advantages

The significance of MTP/MPO cables in high-speed networks, especially in 800G networks, lies in their ability to manage the escalating data traffic efficiently. The following are key advantages of MTP/MPO cables:

High Density, High Bandwidth

MTP/MPO cables adopt a high-density multi-fiber design, enabling the transmission of multiple fibers within a relatively small connector. This design not only provides ample bandwidth support for data centers, meeting the high bandwidth requirements of an 800G network, but also helps save space and supports the high-density connection needs for large-scale data transfers.

Additionally, MTP/MPO cables exhibit excellent optical and mechanical performance, resulting in low insertion loss in high-speed network environments. By utilizing a low-loss cabling solution, they effectively contribute to reducing latency in the network.

Flexibility and Scalability

MTP/MPO connectors come in various configurations, accommodating different fiber counts (8-core, 12-core, 16-core, 24-core, etc.), supporting both multimode and single-mode fibers. With trunk and breakout designs, support for different polarities, and male/female connector options, these features allow seamless integration into various network architectures. The flexibility and scalability of MTP/MPO connectors enable them to adapt to evolving network requirements and facilitate future expansions, particularly in the context of 800G networks.
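
For readers planning trunk purchases, the small sketch below encodes the fiber-count rule of thumb behind these configurations: parallel (non-WDM) optics need a transmit and a receive fiber per lane, whereas WDM optics such as FR4/LR4/CWDM4 multiplex onto a duplex pair. The module examples are common port types chosen purely for illustration.

```python
# Fiber-count arithmetic for parallel optics over MTP/MPO trunks.
# Parallel modules use one TX and one RX fiber per electrical lane;
# WDM modules (FR4/LR4/CWDM4) instead use a duplex fiber pair.

PARALLEL_MODULES = {
    "400G DR4": 4,   # 4 optical lanes
    "800G DR8": 8,   # 8 optical lanes
}

def fibers_required(lanes: int) -> int:
    """Two fibers (TX + RX) per lane for parallel optics."""
    return lanes * 2

if __name__ == "__main__":
    for module, lanes in PARALLEL_MODULES.items():
        print(f"{module}: {lanes} lanes -> {fibers_required(lanes)} fibers")
    # 8 fibers map naturally onto an MTP/MPO-8 or -12 trunk (4 fibers unused
    # on the -12); 16 fibers map onto an MTP/MPO-16 or a pair of MTP/MPO-12s.
```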

Efficient Maintenance

The high-density and compact design of MTP/MPO cables contribute to saving rack and data room space, enabling data centers to utilize limited space resources more efficiently. This, in turn, facilitates the straightforward deployment and reliable operation of 800G networks, reducing the risks associated with infrastructure changes or additions in terms of cost and performance. Additionally, MTP/MPO cables featuring a Plenum (OFNP) outer sheath exhibit fire resistance and low smoke characteristics, minimizing potential damage and saving on cabling costs.

Scaling the 800G Networks With MTP/MPO Cables

In the implementation of 800G data transmission, the wiring solution is crucial. MTP/MPO cables, as a key component, provide reliable support for high-speed data transmission. FS provides professional solutions for large-scale data center users who require a comprehensive upgrade to 800G speeds, aiming to rapidly increase data center network bandwidth to meet growing business demands.

Newly Built 800G Data Center

Given the rapid expansion of business, many large-scale internet companies choose to build new 800G data centers to enhance their network bandwidth. In these data centers, all network equipment utilizes 800G switches, combined with MTP/MPO cables to achieve a direct-connected 800G network. To ensure high-speed data transmission, advanced 800G 2xFR4/2xLR4 modules are employed between the core switches and backbone switches, and 800G DR8 modules seamlessly interconnect leaf switches with TOR switches.

To simplify connections, a strategic deployment of the 16-core MTP/MPO OS2 trunk cables directly connects to 800G optical modules. This strategic approach maximally conserves fiber resources, optimizes wiring space, and facilitates cable management, providing a more efficient and cost-effective cabling solution for the infrastructure of 800G networks.

Upgrade from 100G to 800G

Certainly, many businesses choose to renovate and upgrade their existing data center networks. In the scenario below, engineers replaced the original 8-core MTP/MPO-LC breakout cable with the 16-core version, connecting it to the existing MTP cassettes. The modules on both ends, previously 100G QSFP28 FR, were upgraded to 800G OSFP XDR8. This seamless deployment migrated the existing structured cabling to an 800G rate, largely because the 16-core MTP/MPO-LC breakout cable has proven to be the optimal choice for direct connections from 800G OSFP XDR8 to 100G QSFP28 FR or from 800G QSFP-DD/OSFP DR8 to 100G QSFP28 DR.

In short, this solution aims to increase the density of fiber optic connections in the data center and optimize cabling space. It not only improves current network performance but also accounts for future network expansion.

Elevating from 400G to the 800G Network

How do you upgrade an existing 400G network to 800G in data centers? Let's explore the best practices for achieving this goal with MTP/MPO cables.

Based on the original 400G network, the core, backbone, and leaf switches have all been upgraded to an 800G rate, while the TOR (Top of Rack) remains at a 400G rate. The core and backbone switches utilize 800G 2xFR4/2xLR4 modules, the leaf switches use 800G DR8 modules, and the TOR adopts 400G DR4 modules. Deploying two 12-core MTP/MPO OS2 trunk cables in a breakout configuration between the 400G and 800G optical modules facilitates interconnection.

Furthermore, there is a second connectivity option in which the 800G port uses an OSFP SR8 optical module, the 400G port uses an OSFP SR4 optical module, and the intermediate connection is made with 12-core MTP® OM4 trunk cables.

These two cabling solutions enhance scalability, prevent network bottlenecks, reduce latency, and are conducive to expanding bandwidth when transitioning from lower-speed to higher-speed networks in the future. Additionally, this deployment retains the existing network equipment, significantly lowering cost expenditures.

Item | Product | Description
1 | OSFP-DR8-800G | NVIDIA InfiniBand MMS4X00-NM compatible OSFP 800G DR8 PAM4 2x DR4 1310nm 500m DOM dual MPO-12/APC NDR SMF optical transceiver, finned top.
2 | OSFP800-XDR8-B1 | Generic compatible 800GBASE-XDR8 OSFP PAM4 1310nm 2km DOM MTP/MPO-16 SMF optical transceiver module.
3 | OSFP-2FR4-800G | NVIDIA InfiniBand MMS4X50-NM compatible OSFP 800G 2FR4 PAM4 1310nm 2km DOM dual LC duplex/UPC NDR SMF optical transceiver, finned top.
4 | OSFP-SR8-800G | NVIDIA InfiniBand MMA4Z00-NS compatible OSFP 800G SR8 PAM4 2 x SR4 850nm 50m DOM dual MPO-12/APC MMF NDR finned top optical transceiver module for QM9790/9700 switches.
5 | OSFP-SR4-400G-FL | NVIDIA InfiniBand MMA4Z00-NS400 compatible OSFP 400G SR4 PAM4 850nm 50m DOM MPO-12/APC MMF NDR flat top optical transceiver module for ConnectX-7 HCA.
6 | 16FMTPSMF | MTP®-16 APC (Female) to MTP®-16 APC (Female) OS2 single mode standard IL trunk cable, 16 fibers, plenum (OFNP), yellow, for 800G network connection.
7 | 16FMTPLCSMF | MTP®-16 APC (Female) to 8 LC UPC duplex OS2 single mode standard IL breakout cable, 16 fibers, plenum (OFNP), yellow, for 800G network connection.
8 | 12FMTPSMF | MTP®-12 (Female) to MTP®-12 (Female) OS2 single mode elite trunk cable, 12 fibers, Type B, plenum (OFNP), yellow.
9 | 12FMTPOM4 | MTP®-12 APC (Female) to MTP®-12 APC (Female) OM4 multimode elite trunk cable, 12 fibers, Type B, plenum (OFNP), magenta.

For more specific 800G connectivity solutions, please refer to 800G MTP/MPO Cabling Guide.

Conclusion

Ultimately, the diverse range of MTP/MPO cable types provides tailored solutions for different connectivity scenarios in 800G networks. As organizations navigate the complexities of high-speed data transmission, MTP/MPO cables stand as indispensable enablers, paving the way for a new era of efficient and robust network infrastructures.

How FS Can Help

FS's comprehensive networking solutions and product offerings not only save costs but also reduce power consumption, delivering higher value. Considering an upgrade to 800G for your data center network? FS tailors customized solutions for you. Don't wait any longer: register as an FS website member now and enjoy free technical support.

10GBASE-T vs SFP+: Which one is suitable for 10G Data Center Cabling?

When designing a new network architecture based on 10Gb Ethernet, we face the challenge of choosing the right equipment to achieve maximum performance and support the future demands of complex network applications.

There are two options for 10Gb Ethernet interconnection: 10GBASE-T and SFP+ solutions (SFP+ transceivers and DAC/AOC cables). 10GBASE-T copper modules can span network links of up to 100 meters using Cat6a/Cat7 cables, while SFP+ optical devices support distances of up to 300 meters on multimode fiber and up to 80 kilometers on single-mode fiber.

What are the differences?

SFP+ fiber offers lower latency and cost, and the power consumption of SFP+ solutions is also significantly lower; 10GBASE-T draws roughly three to four times the power of an SFP+ solution. Moreover, 1G SFP transceivers can be inserted into SFP+ ports, operating at 1Gb/s over optical cables to legacy ports, and 1000BASE-T SFP modules can likewise be used to establish lower-speed connections to traditional copper ports.

However, 10GBASE-T copper cabling provides effective backward compatibility with standard copper network equipment, making optimal use of existing copper infrastructure. Additionally, 10GBASE-T is backward compatible with 1G ports, and many low-bandwidth devices still use 1G ports. For small enterprises, 10GBASE-T is generally more cost-effective and easier to deploy than SFP+ solutions.
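
To see how the power gap adds up at scale, the illustrative sketch below compares annual energy use for an assumed per-port draw on each side. The wattages are example figures consistent with the rough "three to four times" relationship above, not measured values for any specific FS product.

```python
# Illustrative comparison of per-port power draw for a 10GbE access layer.
# The per-port wattages are assumed example figures, not measured values.

PORT_POWER_W = {
    "10GBASE-T": 3.5,   # assumed per-port figure
    "SFP+ DAC":  1.0,   # assumed per-port figure
}

def yearly_energy_kwh(ports: int, watts_per_port: float) -> float:
    """Energy for a given port count running 24x7 for one year."""
    hours_per_year = 24 * 365
    return ports * watts_per_port * hours_per_year / 1000

if __name__ == "__main__":
    ports = 96  # e.g. two 48-port access switches
    for tech, w in PORT_POWER_W.items():
        print(f"{tech}: {yearly_energy_kwh(ports, w):,.0f} kWh/year for {ports} ports")
```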

Conclusion

In comparison, if scalability and flexibility are crucial for small enterprise applications, then 10GBASE-T cabling is the better choice. However, if power efficiency and lower latency are paramount, then 10G SFP+ cabling is clearly the winner.

Click to explore a more detailed purchasing guide: 10GBASE-T vs SFP+ Fiber vs SFP+ DAC: Which to Choose for 10GbE Data Center Cabling? | FS Community

How FS Can Help

Whether it’s a 10G copper cabling solution or a 10G fiber cabling solution, FS can customize it according to your needs. Click to register now and promptly enjoy your exclusive solution design.

SFP-10G-SR vs SFP-10G-LR: How to choose?

Optical fiber communication technology is crucial for efficient information transmission, significantly enhancing data transmission speeds, and optical modules are a vital component of it. Among the most common 10G module types are SFP-10G-SR and SFP-10G-LR, so it's important to understand the difference between the two before making a purchase decision.

What are the SFP-10G-SR and SFP-10G-LR

SFP refers to the hot-pluggable small form-factor pluggable package. 10G denotes a maximum transmission rate of 10.3125 Gbps, suitable for 10 Gigabit Ethernet. SR (short reach) and LR (long reach) indicate the transmission distance the 10G SFP+ module supports.

SFP-10G-SR

SFP-10G-SR is designed for short-distance transmission, typically up to 300 meters over multimode fiber. Using an 850nm laser and a duplex LC connector, it is easy to plug in and install. The module supports hot swapping, so it can be safely replaced while the device is running, and delivers stable, reliable performance. In data center networks, SFP-10G-SR is often used for connections between servers to support high-speed data transmission. It is also suitable for enterprise network environments, especially in scenarios with high network performance requirements.

SFP-10G-LR

The SFP-10G-LR is engineered for medium- to long-distance transmission, typically up to 10 kilometers over single-mode fiber. With a 1310nm laser and a duplex LC connector, it installs effortlessly and smoothly. Its compatibility with single-mode optical fiber makes it an ideal solution for medium- to long-distance communication needs, including connections between remote offices. Furthermore, it is well suited for building network backbones, enabling high-speed data transmission among diverse network devices.

Differences Between SFP-10G-SR and SFP-10G-LR

Transmission Distance: The primary distinction lies in their coverage range, with SFP-10G-SR for short distances and SFP-10G-LR for longer ones.

Fiber Compatibility: SFP-10G-SR works with multimode fiber, while SFP-10G-LR requires single-mode fiber.

Use Cases: SFP-10G-SR is optimal for intra-building connections, while SFP-10G-LR is suitable for inter-building or even metropolitan-area connections.

Wavelength: The SFP-10G-SR uses a laser with a wavelength of 850 nanometers, while the SFP-10G-LR uses a laser with a wavelength of 1310 nanometers.

How to Choose the Right Module

After understanding the difference between SFP-10G-SR and SFP-10G-LR, let's walk through typical application scenarios, combined with your network requirements, to guide selection of the appropriate 10G SFP+ optical module.

Data Center

When linking servers, storage devices, or network components within the data center, opt for SFP-10G-SR for short-distance connections such as in-rack or cross-rack setups; for links that exceed multimode reach, SFP-10G-LR is the better choice.

Intra-Enterprise Network

Establishing high-speed connections within the enterprise, such as inter-floor or inter-department links, demands tailored choices. For shorter intra-floor connections, select SFP-10G-SR. Opt for SFP-10G-LR when spanning different floors.

Remote Office/Branch Office

For network connections linking remote or branch offices with the headquarters, SFP-10G-LR is the preferred module due to its suitability for longer distances, ensuring coverage for remote locations.

Inter-City Data Transmission

When establishing high-speed data connections between cities, the preferred choice is SFP-10G-LR, thanks to its compatibility with longer fiber distances, addressing the needs of inter-city connections.

Budget Constraints

If facing budget limitations and the connection distance permits, SFP-10G-SR is generally the more economical option.

Unlocking the Potential of 10G SFP+ Modules with FS Products

The burgeoning era of digitization has spurred a growing demand for optical modules across various sectors, including enterprise networks, data centers, campus networks, and metropolitan area networks. Building on the diverse applications of optical modules, FS.COM, as a premier network solutions provider, offers a diverse range of hot-swappable 10G SFP+ modules designed to maximize uptime and streamline serviceability. Equipped with digital diagnostics monitoring (DDM) capabilities, each unit is meticulously customized and coded for full-function compatibility. FS products undergo rigorous testing and verification to ensure the seamless and reliable operation of your network.

The following table compares the two models (SFP-10G-SR and SFP-10G-LR) offered by FS. You can choose the most suitable one according to your needs.

Model | SFP-10G-SR | SFP-10G-LR
Data Rate (Max) | 10.3125Gbps | 10.3125Gbps
Wavelength | 850nm | 1310nm
Cable Distance (Max) | 300m @ OM3 / 400m @ OM4 | 10km
Connector | Duplex LC | Duplex LC
Transmitter Type | VCSEL | DFB
Cable Type | MMF | SMF
TX Power | -7.3~-1dBm | -8.2~0.5dBm
Receiver Sensitivity | <-11.1dBm | <-14.4dBm
Power Consumption | <1W | ≤1W
Operating Temperature | 0 to 70°C (32 to 158°F) | 0 to 70°C (32 to 158°F)
Application Range | Short-distance connections only | Long-distance connections only
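
Using the TX power and receiver sensitivity values from the table above, a rough worst-case link power budget can be worked out as follows. The fiber attenuation and connector-loss figures are typical planning assumptions rather than FS specifications, and multimode reach is ultimately limited by modal bandwidth and dispersion rather than loss alone.

```python
# Worst-case optical power budget derived from the table above, plus a very
# rough attenuation check. Fiber-loss and connector-loss values are typical
# planning assumptions, not FS specifications.

MODULES = {
    # module: (min TX power dBm, receiver sensitivity dBm, fiber loss dB/km)
    "SFP-10G-SR": (-7.3, -11.1, 3.5),   # OM3 multimode at 850nm
    "SFP-10G-LR": (-8.2, -14.4, 0.4),   # OS2 single-mode at 1310nm
}

CONNECTOR_LOSS_DB = 0.75   # assumed allowance per mated connector pair

def power_budget_db(tx_min_dbm: float, rx_sens_dbm: float) -> float:
    """Worst-case budget = minimum launch power minus receiver sensitivity."""
    return tx_min_dbm - rx_sens_dbm

if __name__ == "__main__":
    for name, (tx, rx, loss_per_km) in MODULES.items():
        budget = power_budget_db(tx, rx)
        margin = budget - 2 * CONNECTOR_LOSS_DB   # e.g. two patch-panel connections
        print(f"{name}: power budget {budget:.1f} dB; after connectors, "
              f"~{margin / loss_per_km:.2f} km of fiber attenuation can be absorbed")
```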

Conclusion

In short, which product to choose ultimately depends on your network layout and connectivity needs. The above considerations can help you quickly select the right product to achieve the best performance in your specific network environment. If you would like to learn about other types of 10G SFP+ modules, you can visit the following resource for more information.

Related resource: Other models of SFP 10G modules

Data Center Layout

Data center layout design is a challenging task requiring expertise, time, and effort. However, if done properly, the data center can accommodate in-house servers and plenty of other IT equipment for years. When designing such a facility for your company or for cloud-service providers, doing everything correctly is crucial.

As such, data center designers should develop a thorough data center layout. A data center layout comes in handy during construction as it outlines the best possible placement of physical hardware and other resources in the center.

What Is Included in a Data Center Floor Plan?

The floor plan is an important part of the data center layout. A well-designed floor plan boosts the data center's cooling performance, simplifies installation, and reduces energy needs. Unfortunately, most data center floor plans are designed through incremental deployment that doesn't follow a central plan. A data center floor plan influences the following:

  • The power density of the data center
  • The complexity of power and cooling distribution networks
  • Achievable power density
  • Electrical power usage of the data center

Below are a few tips to consider when designing a data center floor plan:

Balance Density with Capacity

“The more, the better” isn’t an applicable phrase when designing a data center. You should remember the tradeoff between space and power in data centers and consider your options keenly. If you are thinking of a dense server, ensure that you have enough budget. Note that a dense server requires more power and advanced cooling infrastructure. Designing a good floor plan allows you to figure this out beforehand.

Consider Unique Layouts

There is no specific rule that you should use old floor layouts. Your floor design should be based on specific organizational needs. If your company is growing exponentially, your data center needs will keep changing too. As such, old layouts may not be applicable. Browse through multiple layouts and find one that perfectly suits your facility.

Think About the Future

A data center design should be based on specific organizational needs. Therefore, while you may not need to install or replace some equipment yet, you might have to do so after a few years due to changing facility needs. Simply put, your data center should accommodate company needs several years in the future. This will ease expansion.

Floor Planning Sequence

A floor or system planning sequence outlines the flow of activity that transforms the initial idea into an installation plan. The floor planning sequence involves the following five tasks:

Determining IT Parameters

The floor plan begins with a general idea that prompts the company to change or increase its IT capabilities. From the idea, the data center's capacity, growth plan, and criticality are then determined. Note that these three factors are characteristics of the IT function component of the data center and not the physical infrastructure supporting it. Since the infrastructure is the ultimate outcome of the planning sequence, these parameters guide the development and dictate the data center's physical infrastructure requirements.

Developing System Concept

This step uses the IT parameters as a foundation to formulate the general concept of data center physical infrastructure. The main goal is to develop a reference design that embodies the desired capacity, criticality, and scalability that supports future growth plans. However, with the diverse nature of these parameters, more than a thousand physical infrastructure systems can be drawn. Designers should pick a few “good” designs from this library.

Determining User Requirements

User requirements should include organizational needs that are specific to the project. This phase should collect and evaluate organizational needs to determine if they are valid or need some adjustments to avoid problems and reduce costs. User requirements can include key features, prevailing IT constraints, logistical constraints, target capacity, etc.

Generating Specifications

This step takes user requirements and translates them into a detailed data center design. Specifications provide a baseline of rules to be followed in the last step, creating the detailed design. Specifications can be:

  • Standard specifications – these don’t vary from one project to another. They include regulatory compliance, workmanship, best practices, safety, etc.
  • User specifications – define user-specific details of the project.

Generating a Detailed Design

This is the last step of the floor planning sequence that highlights:

  • A detailed list of the components
  • Exact floor plan with racks, including power and cooling systems
  • Clear installation instructions
  • Project schedule

If the complete specifications are clear enough and robust, a detailed design can be automatically drawn. However, this requires input from professional engineers.

Principles of Equipment Layout

Data center infrastructure is the core of the entire IT architecture. Unfortunately, despite this importance, more than 70% of network downtime stems from physical layer problems, particularly cabling. Planning an effective data center infrastructure is crucial to the data center's performance, scalability, and resiliency.

Nonetheless, keep the following principles in mind when designing equipment layout.

Control Airflow Using Hot-aisle/Cold-aisle Rack Layout

The principle of controlling airflow using a hot-aisle/cold-aisle rack layout is well defined in various documents, including the ASHRAE TC9.9 Mission Critical Facilities guidance. This principle aims to maximize the separation of IT equipment exhaust air and fresh intake air by placing cold aisles where intakes are present and hot aisles where exhaust air is released. This reduces the amount of hot air drawn through the equipment's air intake, allowing data centers to support substantially higher power densities.

Provide Safe and Convenient Access Ways

Besides being a legal requirement, providing safe and convenient access ways around data center equipment is common sense. The effectiveness of a data center depends on how row layouts can double up as aisles and access ways. Therefore, designers should factor in the impact of column locations. A column can take up three or more rack locations if it falls within the row of racks. This can obstruct the aisle and lead to the complete elimination of the row.

Align Equipment With Floor and Ceiling Tile Systems

Floor and ceiling tiling systems also play a role in air distribution systems. The floor grille should align with racks, especially in data centers with raised floor plans. Misaligning floor grids and racks can compromise airflow significantly.

You should also align the ceiling tile grid to the floor grid. As such, you shouldn’t design or install the floor until the equipment layout has been established.


Plan the Layout in Advance

The first stages of deploying data center equipment heavily determine subsequent stages and final equipment installation. Therefore, it is better to plan the entire data center floor layout beforehand.

How to Plan a Server Rack Installation

Server racks should be designed to allow easy and secure access to IT servers and networking devices. Whether you are installing new server racks or thinking of expanding, consider the following:

Rack Location

When choosing a rack for your data center, consider its location in the room. It should leave enough space at the sides, front, rear, and top for easy access and airflow. As a rule of thumb, a server rack should occupy at least six standard floor tiles. Don't install server racks and cabinets below or close to air conditioners, to protect them from water damage in case of leakage.

Rack Layout

Rack density should be considered when determining the rack layout. More free space within server racks allows for more airflow, so leave enough vertical space between servers and IT devices to boost cooling. Since hot air rises, place heat-sensitive devices, such as UPS batteries, at the bottom of server racks; heavy devices should also be placed at the bottom.

Cable Layout

A well-planned rack layout is more than a work of art, and the same is true of cabling: an excellent cable layout should leverage cable labeling and management techniques to ease the identification of power and network cables. Cables should have markings at both ends for easy identification; avoid marking them in the middle. Your cable management system should also have provisions for future additions or removals.

Conclusion

Designing a data center layout is challenging for both small and established IT facilities. Building or upgrading data centers is often perceived to be intimidating and difficult. However, developing a detailed data center layout can ease everything. Remember that small changes in the plan during installation lead to costly consequences downstream.

Article Source: Data Center Layout

Related Articles:

How to Build a Data Center?

The Most Common Data Center Design Missteps

Data Center Containment: Types, Benefits & Challenges

Over the past decade, data center containment has experienced a high rate of implementation by many data centers. It can greatly improve the predictability and efficiency of traditional data center cooling systems. This article will elaborate on what data center containment is, common types of it, and their benefits and challenges.

What Is Data Center Containment?

Data center containment is the separation of cold supply air from the hot exhaust air from IT equipment so as to reduce operating cost, optimize power usage effectiveness, and increase cooling capacity. Containment systems enable uniform and stable supply air temperature to the intake of IT equipment and a warmer, drier return air to cooling infrastructure.

Types of Data Center Containment

There are mainly two types of data center containment, hot aisle containment and cold aisle containment.

Hot aisle containment encloses warm exhaust air from IT equipment in data center racks and returns it back to cooling infrastructure. The air from the enclosed hot aisle is returned to cooling equipment via a ceiling plenum or duct work, and then the conditioned air enters the data center via raised floor, computer room air conditioning (CRAC) units, or duct work.

Hot aisle containment

Cold aisle containment encloses cold aisles where cold supply air is delivered to cool IT equipment. So the rest of the data center becomes a hot-air return plenum where the temperature can be high. Physical barriers such as solid metal panels, plastic curtains, or glass are used to allow for proper airflow through cold aisles.

Cold aisle containment

Hot Aisle vs. Cold Aisle

There are mixed views on whether it’s better to contain the hot aisle or the cold aisle. Both containment strategies have their own benefits as well as challenges.

Hot aisle containment benefits

  • The open areas of the data center stay cool, so visitors to the room will not get the impression that the IT equipment is insufficiently cooled. In addition, it allows some low-density areas to be left uncontained if desired.
  • It is generally considered to be more effective. Any leakages that come from raised floor openings in the larger part of the room go into the cold space.
  • With hot aisle containment, low-density network racks and stand-alone equipment like storage cabinets can be situated outside the containment system, and they will not get too hot, because they are able to stay in the lower temperature open areas of the data center.
  • Hot aisle containment typically adjoins the ceiling where fire suppression is installed. With a well-designed space, it will not affect normal operation of a standard grid fire suppression system.

Hot aisle containment challenges

  • It is generally more expensive. A contained path is needed for air to flow from the hot aisle all the way to the cooling units. Often a drop ceiling is used as a return air plenum.
  • High temperatures in the hot aisle can be undesirable for data center technicians. When they need to access IT equipment and infrastructure, a contained hot aisle can be a very uncomfortable place to work. But this problem can be mitigated using temporary local cooling.

Cold aisle containment benefits

  • It is easy to implement without the need for additional architecture to contain and return exhaust air such as a drop ceiling or air plenum.
  • Cold aisle containment is less expensive to install as it only requires doors at ends of aisles and baffles or roof over the aisle.
  • Cold aisle containment is typically easier to retrofit in an existing data center. This is particularly true for data centers that have overhead obstructions such as existing duct work, lighting and power, and network distribution.

Cold aisle containment challenges

  • When utilizing a cold aisle system, the rest of the data center becomes hot, resulting in high return air temperatures. It may also create operational issues if any non-contained equipment, such as low-density storage, is installed in the general data center space.
  • The conditioned air that leaks from openings under equipment like PDUs and raised floor tiles tends to enter air paths that return to cooling units. This reduces the efficiency of the system.
  • In many cases, cold aisles have intermediate ceilings over the aisle. This may affect the overall fire protection and lighting design, especially when added to an existing data center.

How to Choose the Best Containment Option?

Every data center is unique. To find the most suitable option, you have to take into account a number of aspects. The first thing is to evaluate your site and calculate the Cooling Capacity Factor (CCF) of the computer room. Then observe the unique layout and architecture of each computer room to discover conditions that make hot aisle or cold aisle containment preferable. With adequate information and careful consideration, you will be able to choose the best containment option for your data center.
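
If you want to put a number on that evaluation, the sketch below shows one commonly used way to estimate the CCF: dividing the total rated capacity of the running cooling units by roughly 110% of the IT critical load. Treat the exact formula, and in particular the 10% non-IT allowance, as an assumption rather than a fixed standard, and follow the assessment methodology you actually use on site.

```python
# Rough sketch of a Cooling Capacity Factor (CCF) estimate. The 10% non-IT
# allowance (lights, people, envelope gains) is an assumed planning figure.

def cooling_capacity_factor(running_cooling_kw: float, it_load_kw: float,
                            non_it_allowance: float = 0.10) -> float:
    """Ratio of running cooling capacity to the estimated room heat load."""
    return running_cooling_kw / (it_load_kw * (1 + non_it_allowance))

if __name__ == "__main__":
    ccf = cooling_capacity_factor(running_cooling_kw=600, it_load_kw=350)
    # A CCF far above ~1.2 usually signals stranded capacity caused by bypass
    # and recirculation (poor airflow management) rather than a true shortfall.
    print(f"CCF = {ccf:.2f}")
```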

Article Source: Data Center Containment: Types, Benefits & Challenges

Related Articles:

What Is a Containerized Data Center: Pros and Cons

The Most Common Data Center Design Missteps