What Is ACL (Access Control List) and How to Configure It?

Though a robust network promotes connectivity among people at every corner of the globe, we may not enjoy its convenience or obtain the information we want as easily as we expect. Due to an access control list, some paths to a certain server may have been deliberately blocked. So what is an access control list? Why does it stand in the way of the fantastic online world?

What Is ACL (Access Control List)?

ACL stands for Access Control List. It is a list of rules specified to permit or deny traffic flows. More precisely, an ACL filters data packets based on given filtering criteria on a router or switch interface, thereby controlling access to your network or to specific files or folders on it. How does an ACL work? When a router receives a packet, it routinely identifies the destination address and looks for a matching entry in the routing table. If the lookup succeeds, the packet is forwarded; otherwise, it is discarded. ACL conditions can be applied before or after the router makes its forwarding decision. If a deny condition matches, the packet is dropped immediately; otherwise, processing moves on to the next step as normal.


There are mainly two types of ACLs, namely standard ACLs and extended ACLs. The former specifies only the source address, while the latter can permit or deny traffic based on both the source and destination addresses as well as the ports (for TCP or UDP) or the message type (for ICMP).
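The first-match evaluation behind both ACL types can be modeled with a short Python sketch (a hypothetical rule format for illustration, not any vendor's ACL syntax):

```python
# Illustrative model of ACL first-match evaluation (not vendor syntax).

acl = [
    # (action, source address, destination address); "any" matches everything
    ("deny",   "192.168.1.2", "192.168.1.1"),
    ("permit", "any",         "any"),
]

def evaluate(src, dst):
    """Return the action of the first matching rule, top to bottom."""
    for action, rule_src, rule_dst in acl:
        if rule_src in ("any", src) and rule_dst in ("any", dst):
            return action
    return "deny"  # implicit deny when no rule matches

print(evaluate("192.168.1.2", "192.168.1.1"))  # deny
print(evaluate("10.0.0.1", "192.168.1.1"))     # permit
```

Because rules are checked in order and evaluation stops at the first match, placing the deny rule before the catch-all permit is what makes the filter work.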

Why Do We Need Access Control List?

First, it provides security for your network by filtering unwanted traffic and blocking specific hosts. In the above scenario, without an ACL, anyone who knows the right destination address could send packets through the router under no security policy, and damage may ensue. Given this, you can customize ACL conditions to decide who has access to resources in the network.

Besides, ACLs are used for several other purposes, such as prioritizing traffic for QoS (Quality of Service), triggering alerts, restricting remote access, debugging and so on.

How to Configure Access Control List?

We’ve produced a video to help you better understand ACL networks and ACL configuration. FS S5800/S5850/S8050 series switches are used in the video. Here are the basic access control list commands.

This step enters global configuration mode.

In this step, we create an ACL numbered “123”. Then we can add rules to the ACL. Please note that standard ACL numbers must be between 1–99 or 1300–1999, while extended access list numbers range from 100 to 199 and from 2000 to 2699.

Use the host keyword to specify the host you want to permit or deny. This command denies TCP traffic from host 192.168.1.2 to host 192.168.1.1.

The command above permits all other traffic.

Here we create a class-map and name it “http”.

Match the access control list “123” within the class-map.

Create a policy-map named “web”.

Associate the class-map “http” with the policy-map.

Enter interface eth-0-1 on the FS S5850-32S2Q 10GbE switch.

Apply the policy-map “web” in the inbound direction of the interface.

This series of operations now successfully blocks TCP traffic from host 192.168.1.2 to 192.168.1.1 through the switch.
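Putting the steps together, the whole sequence might look like the following on a Cisco-style CLI (an illustrative sketch; the exact syntax on FS S5800/S5850/S8050 switches may differ):

```
Switch# configure terminal
Switch(config)# access-list 123 deny tcp host 192.168.1.2 host 192.168.1.1
Switch(config)# access-list 123 permit ip any any
Switch(config)# class-map http
Switch(config-cmap)# match access-group 123
Switch(config-cmap)# exit
Switch(config)# policy-map web
Switch(config-pmap)# class http
Switch(config-pmap-c)# exit
Switch(config-pmap)# exit
Switch(config)# interface eth-0-1
Switch(config-if)# service-policy input web
```

Note the explicit `permit ip any any` at the end of the ACL: without it, the implicit deny at the bottom of every ACL would drop all remaining traffic.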

Conclusion

Having read this article, you should now be acquainted with access control lists and know how to configure them. An ACL helps prevent others from entering your private network space while keeping you out of places others don’t let you in. Anyone can adopt it to manage their own network. FS also provides quality equipment such as fiber switches and PoE network switches, along with complete solutions.

NAT: Why Do We Need It?

NAT, which is critical to the IPv4 networks we still use today, has been hotly debated as IPv6 grows with more addresses. However, since the transition to IPv6 is not yet complete, NAT still makes sense. Here I will introduce the definition of NAT, explain how it works, and show why we need it.

What Is NAT?

NAT, known as network address translation, is a method adopted by a firewall or router to map public addresses to the devices working within a private network.

It translates the private IPv4 addresses we use in our internal networks into public IPv4 addresses that can be routed over the internet. As we know, private addresses are used by connected local devices—computers, game consoles, phones, fiber switches, etc.—to communicate with the modem/router and with other devices on the same network. However, the home network connection uses a single public IP address to access the internet. Given this, NAT is responsible for translating the IP address of every device connected to a router into a public IP address at the gateway, so those devices can connect to the internet.


NAT: Why Do We Need It?

Assume that you have 3 PCs, a gigabit Ethernet switch connecting 6 PCs, a 10 gigabit switch connecting 6 PCs, one smartphone and two iPads, and all of them need to work at the same time; then each of them needs an IP address reachable on the internet. But because of the limited IPv4 address space, it is hard to handle the massive number of devices we use every day. Network address translation, proposed in 1994, has become a popular and necessary tool in the face of IPv4 address exhaustion by representing all internal devices with a single shared public address. Together with its extension, port address translation, NAT conserves IP addresses.

Security, another issue of concern when accessing the external internet, can be partly addressed by network address translation, which serves as a strict controller of access to resources on both sides of the firewall. Hackers outside cannot directly attack the internal network, while internal information cannot casually flow to the outside world.

How Does NAT Work?

A NAT-enabled router maintains pairs of local IP addresses and globally unique addresses, translating the local IP addresses to global addresses for outgoing traffic and vice versa for incoming traffic. All of this is done by rewriting the headers of data packets so that they carry the correct IP address to reach the proper destination.

There are generally two types of NAT: dynamic and static.

In dynamic NAT, we map inside local addresses in the internal network to a pool of global addresses. When a host wants to access the internet, the router assigns it an available public IPv4 address from the pool.

In static NAT, we usually map an internal local address to a global address so that hosts on public networks can access a device in the internal network.
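The mapping logic of both types can be sketched in Python (a hypothetical illustration; real NAT devices rewrite packet headers in hardware or the kernel data path, and the addresses below are examples):

```python
# Illustrative sketch of static vs. dynamic NAT address mapping.

class Nat:
    def __init__(self, public_pool, static_map=None):
        self.pool = list(public_pool)         # free public addresses (dynamic NAT)
        self.static = dict(static_map or {})  # fixed private -> public bindings (static NAT)
        self.dynamic = {}                     # current dynamic bindings

    def translate_outgoing(self, private_ip):
        """Return the public address used for an outgoing packet."""
        if private_ip in self.static:
            return self.static[private_ip]    # static: always the same address
        if private_ip not in self.dynamic:
            if not self.pool:
                raise RuntimeError("public address pool exhausted")
            self.dynamic[private_ip] = self.pool.pop(0)
        return self.dynamic[private_ip]

nat = Nat(public_pool=["203.0.113.10", "203.0.113.11"],
          static_map={"192.168.1.50": "203.0.113.5"})
print(nat.translate_outgoing("192.168.1.50"))  # static: 203.0.113.5
print(nat.translate_outgoing("192.168.1.2"))   # dynamic: 203.0.113.10
```

The static entry is reachable from outside because its public address never changes, while dynamic entries exist only while a binding is held; port address translation extends this idea by sharing one public address across many hosts, distinguished by port numbers.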

Conclusion

In a word, until the full transition to IPv6, NAT can guarantee smooth internet surfing no matter how many devices you’ve got. Knowing what it is and how it works with network addresses will help you establish a clear understanding of it so that you can make good use of it.

SDN vs. OpenFlow vs. OpenStack: What’s the Difference?

As the network grows, network equipment producers flourish, bringing many different proprietary products into the market. How can we manage and operate so much equipment when different vendors have diverse CLIs and web interfaces for debugging and configuration? New technologies—SDN, OpenFlow and OpenStack—have been put forward to tackle this problem.

SDN vs. OpenFlow vs. OpenStack: What Are They?

SDN: Software-Defined Network

Software-defined networking (SDN) is a new approach to cloud computing. To improve network monitoring and performance, SDN is designed to enhance network management and enable efficient, programmatic network configuration. It centralizes network intelligence in one network component by decoupling the forwarding process of network packets (the data plane) from the routing process (the control plane). SDN is mainly composed of the application layer, which provides applications and services; the control layer, responsible for unified management and control; and the forwarding layer, which comprises hardware such as fiber switches, Gigabit Ethernet switches and routers that forward data. The following table illustrates the advantages of SDN over a traditional network.

Software-defined Network vs. Traditional Network

Software-defined Network          | Traditional Network
----------------------------------|---------------------------------
Forwarding and control separation | Forwarding and control coupling
Centralized control               | Decentralized control
Programmable                      | Non-programmable
Open interface                    | Closed interface

OpenFlow: the Enabler of SDN

To turn the concept of SDN into a practical implementation, we need to put some protocols in place, among which OpenFlow is the most desirable. So what is OpenFlow?

OpenFlow is a communications protocol that gives a controller access to the forwarding plane of a network switch or router over the network. It can also serve as a specification of the logical structure of network switch functions. Each switch vendor may have its own proprietary interfaces and scripting languages, and this protocol enables their products to work in coordination while avoiding exposure of the technology secrets inside their switches.
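At its core, OpenFlow programs match-action flow tables in the switch. A simplified Python model of a flow-table lookup (illustrative only; the actual protocol defines binary message formats, priorities and many more match fields):

```python
# Simplified sketch of an OpenFlow-style flow table (illustrative only).

flow_table = [
    # (match fields, action); entries checked from most to least specific
    ({"ip_dst": "10.0.0.5", "tcp_dst": 80}, "output:2"),
    ({"ip_dst": "10.0.0.5"}, "output:3"),
    ({}, "controller"),  # table-miss entry: punt the packet to the controller
]

def lookup(packet):
    """Return the action of the first flow entry matching the packet."""
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"

print(lookup({"ip_dst": "10.0.0.5", "tcp_dst": 80}))  # output:2
print(lookup({"ip_dst": "10.0.0.9"}))                 # controller
```

The table-miss entry is how an SDN controller learns about new traffic: unmatched packets are sent to it, and it responds by installing new flow entries.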

OpenStack

OpenStack is an open-source cloud computing management platform project that combines several major components to accomplish specific tasks. It competes with Amazon’s AWS, as it allows all participants to access the source code and share their ideas if they want to. It is convenient and reliable, with strong compatibility and adaptability, and it has gained support from many vendors.


SDN vs. OpenFlow vs. OpenStack: What’s the Difference?

SDN vs. OpenFlow

SDN and OpenFlow are often confused and misunderstood. Looking at SDN vs. OpenFlow, the two are indeed interconnected. First of all, as an open protocol, OpenFlow underpins various SDN controller solutions. A complete SDN solution takes the SDN controller as its core, backed by OpenFlow switches and NFV, to offer abundant SDN applications for a new smart, dynamic, open, custom network.

OpenFlow vs. OpenStack

OpenFlow, since its release, has gained broad hardware and software support. Cisco, Juniper, Toroki and Pronto have all launched network equipment—such as 10GbE switches, routers and wireless access points—that supports OpenFlow. In contrast, OpenStack covers many aspects, including networking, virtualization, operating systems and servers. It is an ongoing cloud computing platform.

SDN vs. OpenStack

OpenStack handles network orchestration: it organizes a particular group of assets, from open-source or closed implementations, so it can be considered the “how” of deploying a software-defined network. SDN control, by contrast, serves as the commander: it maintains consistent (as far as is feasible) policy across multiple groups of assets, so we can deem it much like the “why.”

Conclusion

SDN vs. OpenFlow vs. OpenStack—these three terms of far-reaching significance are attracting more and more public attention. This article may give you some help in getting to know them as a first step. Networking technologies are still advancing, and knowing what they are at present doesn’t mean truly mastering them. There is still plenty of room left to explore.

What Is Blank Patch Panel and How to Use It?

Proper cable management is always a must for data center networks to ensure tidy and organized cabling environments. We have introduced fiber optic patch panels, fiber enclosures and other fiber cable management products in previous posts on fiber cabling solutions. How about copper cabling solutions? This post will introduce the copper blank keystone patch panel and its installation method. We’ll also compare blank patch panel vs preloaded patch panel to give you the best selection guide for Ethernet cabling.


Figure 1: 12 Cat6 cables and 12 Cat5e cables terminated on a single 24-port blank patch panel installed with Cat6 and Cat5e insert modules.

What Is Blank Patch Panel?

The blank keystone patch panel, or unloaded patch panel, is an alternative type of Ethernet patch panel. Different from the pre-loaded patch panel with built-in RJ45 ports, the blank keystone patch panel is designed with 24 or 48 empty slots. The empty slots allow one to install different keystone jacks, such as Cat5e/Cat6 insert modules, as needed. Thus the blank patch panel can terminate different cables when the matching connectors are fitted, and a single patch panel enables several types of cables to be connected. All blank keystone patch panels from FS.COM are high-density 1U rack mount, whether 24-port or 48-port. They can be easily mounted into a standard 19’’ rack, cabinet or wall bracket. All empty ports are also pre-numbered for easy connection and identification.


Figure 2: Using different keystone jacks or insert modules to customize a 24-port blank keystone patch panel.

What Are the Types of Blank Patch Panel?

Generally, FS manufactures two types of blank patch panels, with 24/48-port, STP/UTP, and Ethernet/multimedia network cabling options.

For Ethernet cabling only, consider this 24-port blank keystone patch panel. It is an unshielded patch panel with 24 blank slots in a compact 1U. This RJ45 patch panel is used to manage and organize Ethernet cables such as Cat5e patch cables and Cat6 cables.

To enhance network cabling resiliency, consider the blank keystone/multimedia patch panels. At FS.COM, the multimedia blank patch panels come as a 48-port UTP panel and a 24-port STP/UTP panel in 1U rack mount. Different from the aforesaid 24-port Ethernet patch panel, their ports accommodate various snap-in jacks for RJ45 Ethernet, HDMI audio/video, voice and USB applications. This allows users to customize their patch panels for different schemes.

How to Use Blank Patch Panel for Ethernet Cabling?

To use blank keystone patch panels for Ethernet cabling, follow the instructions below.

  • Choose the proper quantity of Cat6 or Cat5e RJ45 insert modules according to your Ethernet cable types. RJ45 insert modules are available shielded in metallic silver or unshielded in various colors.
  • Snap the Cat6 or Cat5e RJ45 insert modules (from the rear of the panel to the front) into the empty ports on the blank patch panel.
  • Install the equipped Ethernet patch panel onto a 1U rack with screws and a screwdriver.
  • Plug Cat6 or Cat5e cables into the corresponding Cat6 or Cat5e modules.
  • Manage cables with the help of cable management accessories such as cable managers, lacing bars and cable ties.

Blank Patch Panel vs Preloaded Patch Panel

The blank keystone patch panel has the advantage of personalized setup and installation, allowing one patch panel to terminate different types of cables as long as the corresponding insert modules are installed. For example, by loading both Cat5e and Cat6 insert modules on a 24-port keystone patch panel, we can terminate both Cat5e and Cat6 cables in the matching ports. So the blank patch panel is an ideal choice for skillful operators who want to configure their own patch panels for customized cabling requirements.

If you seek the most user-friendly RJ45 patch panel, go for a feed-through patch panel instead of a blank patch panel. The feed-through patch panel is an optimized pre-loaded patch panel that spares you the trouble of punching down wires onto the ports, as a traditional punch-down patch panel requires. The feed-through 24-port patch panel has built-in RJ45 ports at both the front and rear for directly terminating Ethernet patch cables. The front ports are marked with sequential numbers for easy identification. The feed-through patch panel is an ideal choice for high-density cabling environments that demand convenient and efficient installation. FS feed-through patch panels come as Cat6 and Cat5e patch panels. The Cat6 patch panels are unshielded only, whereas the Cat5e patch panels are available in both STP and UTP.

Conclusion

The blank keystone patch panel is an unloaded copper patch panel which provides customized configuration with different keystone jacks. The various RJ45 insert modules installed on the Ethernet patch panel allow different cables to be terminated. The 24-port and 48-port keystone/multimedia blank patch panels offer copper cabling solutions for Ethernet, video/audio, voice and USB applications. For the choice between blank patch panel and preloaded patch panel, here’s the reference: the blank patch panel is perfect for operators who prefer to configure network patch panels themselves to suit their data center cabling, while the preloaded feed-through patch panel is a better choice for anyone requiring easy and direct access for Ethernet cabling.

How to Manage Cables in Server Rack?

In data centers, we run all enterprise network equipment (servers, storage, network switches, etc.) in server racks, and various wires such as fiber optic cables, Cat5e/6 Ethernet cables and power cords spread all over the floor. It’s a disaster to see all these cables tangled together without knowing which ends they trace to, which makes troubleshooting inconvenient for operators. Besides, intertwined wires cause cooling problems, crosstalk and interference, which lead to performance issues. Fragile fibers under neglected management easily break. All these reasons confirm the necessity of proper cable management. So how do you manage cables in a server rack?


Figure 1: An array of cable management accessories are installed in open server racks to manage cables.

Deploy Proper Server Rack

Above all, estimate your enterprise network scale, cabling numbers and other requirements to choose a proper server rack. There are mainly three types of server cabinets in the market; make sure to choose the one suited to your network environment. All these server racks are competitively priced at FS.COM.

  • Open Frame Server Rack

The open frame rack has no sides or doors, leaving it open to the air. It provides easy access, sufficient open space and airflow for cable management, ideal for high-density cabling in server rooms and data center racks. Open server racks are best used in applications that don’t require security protection for cables. 2-post and 4-post are the two types of open frame racks; the former requires less depth, whereas the latter supports more weight.

  • Enclosed Server Rack

The enclosed server rack is a server cabinet with front and back doors and side panels. The doors can be locked to prevent intentional sabotage and dust invasion. Black 42U/45U server racks are available at FS.COM to offer abundant rack cabling space. FS elaborately designs efficient brush guards on the roof to facilitate airflow and ensure better cooling.

  • Wall Mount Server Rack

The wall mount server rack is used to hold network equipment such as network switches and network accessories such as fiber patch panels. It can be attached to the wall to save floor space. FS manufactures 9U/12U 4-post wall mount network cabinets with glass front doors. The drawback is that a wall mount server rack cannot room as many network devices as other server racks; in that case, a bigger network cabinet is the better choice. Here is a comprehensive data center cabling solution video guide for your reference.

Deploy Other Cable Management Products in Server Rack

After choosing optimal server racks for your cabling solution, take other cable management accessories into consideration.

  • Deploy the right fiber patch panels or cassettes to terminate your fibers, and use matching fiber enclosures to load them along with other enclosure accessories such as fiber slack management spools. An intact loaded fiber enclosure ensures safe entry and exit of fiber patch cables and stores excess fiber in a compact 1U/2U/4U.


  • Use cable organizer – horizontal cable manager and vertical cable manager to keep scattered running cables in right place and ensure a neat rack environment. Besides, employ cable ties/zip ties to fix wire bundles.


  • Put emphasis on cable identification tools. Cable ties are available in different colors to help distinguish different types of cables or end devices. Color-coded fibers and Ethernet cables are also helpful for recognition. You can also buy color-coded cable labels with different numbers to mark your wires.


Conclusion

Proper cable management cannot be accomplished in one action. First of all, carefully plan your cabling solution on the basis of your data center scale. Then deploy proper server racks or cabinets and other cable management products—fiber enclosures, fiber patch panels, cable organizers/managers, cable ties and cable labels—to keep cables shipshape and facilitate identification. With all these jobs done in order, you will have a clean and decent server room and give the cables and the whole system the warranty of security.

Related Article:

Fiber Optic Enclosure: What to Benefit From It?

How to Choose Cable Manager for Rack?

How to Choose Cable Manager for Rack?

In data centers, cable management is of great concern in addition to functional data transmission devices. A mass of cables scattered and tangled together makes maintenance bothersome for operators. Neglecting effective cable management also leads to cable damage and performance issues. To make clean and safe entry and exit for each cable, one must employ a proper rack cable manager to ensure an organized and neat rack environment. However, there is an array of cable manager products in the market. How do you choose the right cable manager for a rack? This post may help.


Figure 1: Deploying cable managers for racks and other cable management products to make a clean and neat server room in the FS data center.

Overview of Cable Manager for Rack

Cable managers serve the following functions: organizing and protecting running cables, reducing crosstalk and signal interference, facilitating airflow and cooling, and ensuring a clutter-free data center. In general, there are two types of cable managers for racks: the horizontal cable manager and the vertical cable manager. Further, each type has different styles available to cater to diverse cabling environments. Other cable management products frequently used together with cable managers are fiber optic enclosures, fiber patch panels, cable ties, cable labels and so on.

Types of Cable Manager for Rack

Understanding the different types of cable managers for racks helps you make a wise decision when buying cable management products.

  • Horizontal Cable Manager

The horizontal cable management panel is often installed in front of equipment, such as a network switch, in parallel. In most cases, the horizontal cable manager is matched with a patch panel and rack-mount enclosure to give cables a safe and organized pathway from the switch ports into the vertical cable manager. A dozen wires are often bundled with a cable tie to fix loose cables.

  • Vertical Cable Manager

The rack vertical cable management panel is installed upright on each side of the cabinet walls to take over cables coming out of the horizontal cable managers. Usually several horizontal cable managers are placed at different levels, with two bunches of wires hanging down from each level. The bilateral vertical managers thus play an important role in keeping a multilayer cabinet clean.

Both the horizontal cable manager and the vertical cable manager are installed in a server rack or cabinet to run cables away from equipment neatly. To achieve an optimal rack environment, the two are used together in a server cabinet.

Considerations for Choosing Cable Manager for Rack

In addition to the aforesaid horizontal vs vertical cable manager distinction, there are other factors to consider.

  • Cable Management Environment

If you deal with massive cables for a big enterprise data center, consider the combination of horizontal and vertical cable managers in each of your server room cabinets. If you just handle small business or office cabling, you can save some money by omitting the vertical cable management deployment.

  • Rack Specification

Server racks usually come in a standard 19-inch width, while depth is flexible to some degree. Rack heights, however, are measured in rack units (1U = 1.75 inches), such as 1U, 2U and so on. Cable managers for racks are likewise designed in different rack units, so make sure to buy the right size. Notably, bend-radius finger brackets can be stacked to reach any height, an ideal choice for tall rack environments.

  • Cable Manager Style

1. To prevent equipment from overheating, choose a horizontal cable manager with a brush strip to ensure better cooling.


2. For optimal cable care and to protect fragile fibers from damage, choose a horizontal cable manager with finger duct.


3. To enable flexible routing of massive cords with a proper bend radius, pick a cable manager with D-rings. It is made of steel for strength and durability, available in 1U and 2U at FS.COM.


4. For managing different cable types and quantities, choose different styles of cable managers. For example, the vertical cable manager with bend radius fingers provides extra-deep cable capacity for applications requiring large, thick cable bundles (Ethernet cables: Cat5e/6/7). The FS 5U 3’’ wide plastic vertical cable manager with bend radius fingers offers ultra-high cable capacity of up to 23 Cat6 cables.


Conclusion

The cable manager is one of the important cable management products for racks in data centers. Understanding horizontal vs. vertical cable management is the first step toward the right deployment in rack cable management. Considering your cabling environment, rack specification (especially U size), and the various cable manager styles catering to special requirements will help you choose the best cable manager for your rack.

Fiber Optic Enclosure: What to Benefit From It?

In data centers, a variety of cable management accessories are used in combination with enterprise network components, such as fiber optic patch panels, fiber optic enclosures and cable ties. In my previous post – Patch Panel vs Switch: What’s the Difference? – I introduced the role of the fiber optic patch panel as a cable management tool. The fiber optic enclosure/fiber optic box is also a frequently used tool for rack cabling solutions. This post will introduce the fiber enclosure and what we can benefit from it.

What Is Fiber Optic Enclosure?

Fiber enclosure/fiber splice box may refer to an empty box or to an intact unit after installation. A loaded fiber optic box contains installed assembly units to connect and separate various fiber optic cables. Usually an unloaded fiber optic enclosure comes in 1U/2U/4U sizes, which can house a corresponding quantity of fiber optic cassettes or fiber patch panels. Some people refer to the fiber optic enclosure and the fiber optic patch panel as the same thing since they are matching devices.


Figure 1: FS slide-out 1U rack mount FHD fiber optic enclosure interior structure in data center fiber cabling application.

What to Benefit From Fiber Optic Enclosure?

Cable Management Function
  • In general, the fiber enclosure performs cable management in data centers for a clean and tidy cabling environment.
  • It houses and fixes fiber optic patch panels or fiber optic cassettes in a box for better management and protection.
  • Inside accessories such as the fiber slack management spool provide a proper bend radius for cables and help to route, manage and store fibers.
  • Different types of installed adapters enable various incoming fibers to be terminated in high density and protect them from damage.
Optional Design for Different Deployment Scenarios

Fiber optic enclosures are available in different types. They differ in configuration, such as rack mount and wall mount fiber enclosures. Further, the rack mount enclosure comes in different open-close designs, rack unit sizes, and patch panel/cassette capacities. Different types of fiber optic enclosures cater to different deployment scenarios.

  • Mount Type Option

The wall mount enclosure usually fits wall mount applications such as cross-connection in the telecommunication room. The rack mount fiber enclosure is very popular for rack cabling solutions in cabinets.

  • Slide-out Design

Rack mount enclosures have two models available at FS.COM: the slide-out type and the cover-removable type. With a slide-out rack mount fiber optic enclosure, you don’t need to remove the enclosure from the rack for internal access. The transparent cover also allows fiber inspection while the cover stays closed. This facilitates cable management, maintenance and installation.

  • High Density

Fiber optic enclosures also provide high-density cabling in the FHD and FHX rack mount designs. FHD fiber enclosures come in 1U, 2U and 4U models, correspondingly housing 4/8/12 FHD fiber adapter panels or FHD MPO/MTP cassettes and terminating up to 96/192/288 fibers. The FHX rack mount enclosure is an ultra-HD fiber optic enclosure, which holds up to 144 fibers in a compact 1U form factor to largely save rack space.


Figure 2: The FS FHX ultra HD rack mount fiber enclosure holds up to 12 FHX MPO/MTP-12 cassettes/fiber optic adapter panels (144 fibers) in 1U.

Conclusion

The fiber optic enclosure is a box that loads fiber optic patch panels/cassettes and other accessories to provide a cable management solution for fiber cabling. The fiber enclosure ensures a tidy cabling environment and protects fragile fibers from outside damage. The elaborate design of the various types of fiber optic enclosures also suits different deployment scenarios and better caters to specific requirements. Rack mount and wall mount fiber enclosures provide optional mounting applications. The slide-out design and transparent cover enable convenient inspection and maintenance. The FHD rack mount fiber optic enclosure offers high-density fiber termination in 1U/2U/4U options. The FHX ultra-HD rack mount enclosure achieves high fiber capacity in a space-saving 1U.

LAG vs LACP: What’s the Difference?

In the field of Ethernet switch connection, link aggregation is a technology that combines multiple parallel ports between network switches. It serves to expand bandwidth cost-effectively and to provide redundancy against link failure. However, “link aggregation” is rather a broad umbrella term containing various concepts: Link Aggregation Control Protocol, Link Aggregation Group, MLAG, 802.3ad, 802.1AX, etc. Among them, the issue of LAG vs LACP confuses many people. Here we introduce LAG and LACP in turn and compare them to illustrate their relationship and differences.

LAG vs LACP: What Is LAG?

A LAG (Link Aggregation Group) is an actual instance of link aggregation. A Link Aggregation Group forms when we connect multiple ports in parallel between two switches and configure them as a LAG. The LAG thus bundles multiple links between the two switches, which expands bandwidth. It also provides link-level redundancy and load-balances traffic across the member links. Even if one link fails, the remaining links between the two switches keep running and take over the traffic that would have traversed the failed one, so no data packets are lost.
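
The load-balancing and failover behavior described above can be sketched in a few lines of Python. This is a minimal illustrative model, not any vendor's implementation; the port names and the CRC-based hash are assumptions chosen for clarity:

```python
# Minimal sketch of how a LAG load-balances flows across member links
# and fails over when a link drops (illustrative, not vendor code).
import zlib

class Lag:
    def __init__(self, links):
        self.links = list(links)          # member port names
        self.up = set(links)              # links currently active

    def pick_link(self, src_mac, dst_mac):
        """Hash the flow onto one active link, as switches typically do,
        so packets of the same flow stay in order on a single link."""
        active = sorted(self.up)
        if not active:
            raise RuntimeError("all LAG members down")
        h = zlib.crc32(f"{src_mac}-{dst_mac}".encode())
        return active[h % len(active)]

    def link_down(self, link):
        self.up.discard(link)             # traffic re-hashes onto survivors

lag = Lag(["eth1", "eth2", "eth3", "eth4"])
before = lag.pick_link("aa:bb", "cc:dd")
lag.link_down(before)                     # simulate a link failure
after = lag.pick_link("aa:bb", "cc:dd")   # flow moves to a surviving link
assert after != before and after in lag.up
```

The key point the sketch captures is that failover is transparent: the same flow simply re-hashes onto one of the surviving links.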

LAG vs LACP: What Is LACP?

LACP (Link Aggregation Control Protocol) is a control protocol that sets up LAGs automatically. You can build a static LAG without LACP, or set up a dynamic LAG by using LACP. Simply put, LACP is not a link aggregation instance itself but a protocol for negotiating one. LACP turns a static LAG into a dynamic one by allowing the member switches to exchange link aggregation information. This information is delivered in packets called Link Aggregation Control Protocol Data Units (LACPDUs). Each port on both switches can be configured as active or passive: an active port initiates LACPDU transmission, while a passive port only responds to LACPDUs it receives.
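
One practical consequence of the active/passive setting is worth spelling out: if both ends of a link are passive, neither side ever initiates LACPDUs, so no dynamic LAG forms. A tiny sketch of that rule (hypothetical function name, for illustration only):

```python
# Sketch of the LACP active/passive rule: a dynamic LAG only forms
# if at least one end of the link actively sends LACPDUs.
def lag_negotiates(local_mode, peer_mode):
    """Both ends 'passive' -> neither side initiates, so no LAG forms."""
    assert local_mode in ("active", "passive")
    assert peer_mode in ("active", "passive")
    return "active" in (local_mode, peer_mode)

assert lag_negotiates("active", "passive")
assert lag_negotiates("active", "active")
assert not lag_negotiates("passive", "passive")
```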

LAG Implementation Scenario

Since LACP is only a protocol for a Link Aggregation Group, we'll set the LAG vs LACP distinction aside and look at a LAG implementation scenario from FS.COM. Take the LAGs between two gigabit PoE network switches and a 10GbE fiber switch as an example. If we simply connect one port on each gigabit PoE switch with one cable, we get 1GE of bandwidth. But if we double or triple the links, the bandwidth becomes 2GE, 3GE and so on.

Further, to uplink to a backbone core switch, we can use 4 fiber patch cables with corresponding modules to link the 10GE SFP+ ports on the 48 port gigabit PoE switch to the 10GbE fiber optic switch. The uplink bandwidth on the S1600-48T4S then expands to 40GE. In this case two LAGs form on the 48 port PoE switch. The maximum number of links per LAG and the number of LAGs supported between two switches vary by vendor and switch model.
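
The arithmetic in the two scenarios above is simply linear scaling with the number of member links; the figures are nominal line rates, not measured throughput:

```python
# Nominal LAG bandwidth scales linearly with the number of member links.
def lag_bandwidth_gbps(links, per_link_gbps):
    return links * per_link_gbps

assert lag_bandwidth_gbps(3, 1) == 3      # 3 x 1GE access links -> 3GE
assert lag_bandwidth_gbps(4, 10) == 40    # 4 x 10GE SFP+ uplinks -> 40GE
```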

Figure 1: Linking four 1GE ports in parallel between the FS 48 port PoE switch and the 24 port PoE switch to set up a LAG, which boosts bandwidth from 1000Mbps to 4 × 1000Mbps. In this figure, two LAGs have been implemented on the FS 48 port PoE switch.

LAG vs LACP: Link Aggregation Advantages in Expanding Bandwidth

Whether or not a LAG deploys the Link Aggregation Control Protocol, it requires no expensive hardware upgrade, so a Link Aggregation Group provides a cost-effective solution for bandwidth expansion. Switch stacking is indeed an advanced method of obtaining higher bandwidth, but it is restricted to stackable switches and does not allow the switches to be placed separately. Buying a higher-speed switch such as a 10GbE switch is also a direct and effective solution, but for ordinary users this hardware upgrade is over budget.

LAG vs LACP: What’s the Difference?

  • A Link Aggregation Group is a practical instance of link aggregation, whereas LACP is a protocol for auto-configuring and maintaining a LAG.
  • A LAG without the Link Aggregation Control Protocol is a static configuration, in which each pair of ports in the LAG must be configured manually. LACP-enabled ports, by contrast, are configured dynamically and auto-negotiate into trunk groups when building the LAG.
  • When people talk about LAG vs LACP, they usually mean static LAG (without LACP) vs dynamic LAG (with LACP). Generally speaking, a dynamic LAG configuration has advantages over a static one, thanks to automatic failover and mutual dynamic configuration. With static link aggregation, a LAG cannot detect configuration or cabling errors, which can cause unnecessary network trouble.

Conclusion

The LAG vs LACP question comes up because the two concepts are easily confused. A LAG is an actual instance of link aggregation; LACP is a control protocol that enables a LAG to automatically configure network switch ports, detect link failure and activate failover. LAG therefore encompasses both static and dynamic LAG configurations, depending on whether the optional Link Aggregation Control Protocol is employed. As a whole, a Link Aggregation Group is a more cost-effective way to expand bandwidth than switch stacking and other hardware upgrades. To minimize the impact of network link failure, an LACP-enabled dynamic LAG is a much better solution than a static LAG.

What Is SDN Networking?

More than a decade has passed since the concept of SDN was proposed on the heels of OpenFlow, and software-defined networking has gone through years of research. It was not until 2012, after Google announced that its backbone network, spanning 12 data centers worldwide, had successfully run on OpenFlow and increased WAN utilization from 30% to nearly 100%, that OpenFlow proved itself a mature and advanced technology ready for data center networks. Accordingly, SDN networking built on the programmability of the OpenFlow protocol has become a booming networking technology in large data centers. What is SDN? What advantages does SDN networking bring? This article may help you understand.

What Is SDN Networking?

Software-defined networking (SDN) is a technology developed to cater to modern high-bandwidth, dynamic applications. It was invented to change the existing, static networking infrastructure into a dynamic and manageable one. Its core idea, built on the OpenFlow protocol, is to separate the software control plane from the hardware network device, which lets the network's behavior be defined in software. Software-defined networking infrastructure thus becomes more flexible and agile. For instance, SDN achieves centralized management through one remote controller: all network components in the structure, such as servers, routers or Ethernet data switches, can be added and removed easily and efficiently.

What Are the Advantages of SDN Networking?

Software Programmable

SDN technology detaches network control from the networking hardware, making the SDN network directly programmable. Operators can write SDN programs themselves and quickly implement configuration, management, security monitoring and network optimization. A flexible SDN network thus supports fine-grained traffic control to adjust traffic agilely and meet changing demands.

Open Standard and Control via SDN Control Plane

An SDN network deploys a centralized intelligent controller, which programs devices such as SDN data switches in software, bridges communication between data devices and applications, and presents a panorama of the network as a virtual switch. This removes the trouble of handling each network device differently and supports customized control. For instance, in a leaf-spine architecture, 10 gigabit switches and 40/100GbE switches are deployed at different data center layers; an SDN controller can manage every switch synchronously.
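
The controller-programs-switches idea can be sketched with a toy model. This is not the OpenFlow API; the class names, match fields and action strings are invented for illustration, but the pattern mirrors OpenFlow's match/action flow tables and table-miss behavior:

```python
# Toy sketch of the SDN idea: a centralized controller pushes
# match/action flow rules into every switch it manages.
class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []              # list of (match, action) rules

    def handle(self, packet):
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"       # table miss -> ask controller

class Controller:
    def __init__(self):
        self.switches = []

    def install_flow(self, match, action):
        # One central decision, programmed into all managed switches.
        for sw in self.switches:
            sw.flow_table.append((match, action))

ctrl = Controller()
leaf, spine = Switch("leaf1"), Switch("spine1")
ctrl.switches += [leaf, spine]
ctrl.install_flow({"dst": "10.0.0.5"}, "forward:port2")

assert leaf.handle({"dst": "10.0.0.5"}) == "forward:port2"
assert spine.handle({"dst": "10.0.0.9"}) == "send_to_controller"
```

Note how one `install_flow` call updates both the leaf and the spine switch: this is the centralized, synchronous management the paragraph describes.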

Figure 1: SDN switches and other network applications communicate with and are controlled by a centralized SDN controller via the SDN protocol in an SDN network environment.

What Are the Applications of SDN Networking?

In a traditional architecture, reconfiguring a network device is a cumbersome task. Driven by fast-changing Internet business applications, the modern networking environment demands flexible adjustment, and SDN meets that need, booming across a wide range of applications. Software-defined networking has developed into three branches: software-defined mobile networking (SDMN), software-defined wide area networking (SD-WAN) and software-defined local area networking (SD-LAN). Overall, SDN is most frequently used in data center applications. For instance, deploying an SDN switch from FS.COM, such as the FS N5850-48S6Q 48 port 10 gigabit switch with 6 QSFP+ 40GbE ports, in an SDN environment enables easy flow control and configuration.

Figure 2: Deploying an FS 40/100GbE switch in a software-defined networking environment as an SDN visibility and security solution.

Conclusion

SDN technology transforms the stagnant state of Internet networking architecture, making SDN networks flexible and responsive to business applications. By detaching the control function from hardware devices (e.g. SDN switches), an SDN network achieves quick configuration and management via a centralized SDN controller, and operators can reconfigure an Ethernet switch through the SDN protocol quickly and easily.

1Gb Backbone vs 10Gb Backbone: Gigabit Switch or 10GbE Switch

The modern world is developing at full speed, and so is the telecommunications industry. Not long ago a 10GbE switch was a luxury affordable only to large enterprises, while many individuals and businesses used 10/100Mb switches and could at best reach a gigabit switch for a 1Gb backbone. This situation is changing thanks to falling prices and growing market demand: more and more SMBs and individuals now use gigabit switches and are looking to move to 10Gb switches. Questions such as whether to deploy a gigabit switch as a 1Gb backbone or a 10GbE switch as a 10Gb backbone have therefore stirred heated discussion on many forums. This article offers a selection guide for the 1Gb backbone vs 10Gb backbone decision.

What Is 1Gb Backbone Gigabit Switch?

Simply put, a 1Gb backbone refers to a configuration in which a gigabit switch is used as the core switch of the network. A typical 1Gb backbone scenario runs 10/100Mb access layer switches with 1Gb uplinks back to a central gigabit switch. In this case the 1Gb uplink port on the 100Mb switch receives 1Gbps from the gigabit switch, then divides that 1Gb of bandwidth among its endpoints. Restricted by the ordinary 10/100Mb ports, each access point gets at most 100Mb. For this reason, 100Mb switches have gradually been superseded by gigabit switches.

To achieve 1000Mbps and bring in PoE capability, modern operators often pair gigabit PoE access switches with a 1Gb backbone gigabit switch. Here is a 1Gb backbone deployment scenario from FS.COM: employ the S5800-48F4S 48 port gigabit SFP switch as the 1Gb backbone in the data center; link two 24 port PoE switches in the office to connect and power IP phones, wireless APs, desktops and laptops; then run two wires to two 8 port PoE switches in the warehouse for IP surveillance.

Figure 1: Deploying an FS 48 port gigabit switch with 10Gb uplinks as the core switch and FS 8/24 port gigabit PoE switches as access switches.

What Is 10Gb Backbone 10GbE Switch?

Similarly, a 10Gb backbone refers to a configuration in which a 10GbE switch serves as the core switch in the data center, with gigabit switches running 10Gb uplinks back to the central 10Gb switch. To illustrate, here is a deployment scenario: we deploy the S5800-8TF12S 10Gb SFP+ switch as the core 10GbE switch in the data center. Using the S3800-48T4S 48 port switch and the S1600-48T4S 48 port gigabit PoE switch as access switches, we run fiber patch cables to the corresponding 10Gb SFP+ uplink ports on these access switches. The 10Gb uplink bandwidth can then be divided among the ordinary ports of the access gigabit switches.

Suppose 10 identical endpoints are connected to 10 ports on the gigabit Ethernet access switch. Each can then obtain up to 1000Mb from the 10Gb uplink bandwidth, so gigabit speed is retained. If the upper-layer switch were instead a gigabit switch, each endpoint could only get 100Mb.
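
The division above is easy to sketch: the uplink bandwidth is shared among the endpoints, and each endpoint is further capped by its own port speed (the function name and the even-sharing assumption are ours, for illustration):

```python
# Sketch of the backbone arithmetic: uplink bandwidth shared evenly
# among endpoints, capped by each endpoint's own port speed.
def per_endpoint_mbps(uplink_mbps, endpoints, port_speed_mbps=1000):
    return min(uplink_mbps / endpoints, port_speed_mbps)

# 10Gb backbone: 10 endpoints share a 10Gb uplink -> full gigabit each
assert per_endpoint_mbps(10_000, 10) == 1000
# 1Gb backbone: the same 10 endpoints share a 1Gb uplink -> 100Mb each
assert per_endpoint_mbps(1_000, 10) == 100
```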

Figure 2: Deploying the S5800-8TF12S 10Gb SFP+ switch as the 10Gb backbone, with the S3800-48T4S 48 port switch and the S1600-48T4S 48 port gigabit PoE switch as gigabit access switches.

1Gb Backbone vs 10Gb Backbone: Gigabit Switch or 10GbE Switch as Core Switch?

Virtualization Application

Generally speaking, the choice between a gigabit switch for a 1Gb backbone and a 10GbE switch for a 10Gb backbone depends heavily on your virtualization applications. Even in a small office with only a few PCs, demanding high-bandwidth applications may require a 10Gb backbone switch. That is, if you or your employees must handle heavy pictures and videos every day, a 10GbE backbone switch is a must for smooth operation and work efficiency; otherwise you may easily run into network congestion. For example, a regular video conference in a midsize enterprise can drop randomly, wasting time and dragging down the schedule.

Number of Users

Also, pay attention to the number of users. Count all current endpoints: computers, wireless APs, IP phones, etc. Try to measure the traffic load by plotting utilization, then take future expansion into consideration. If your backbone gigabit switch ports are already running hot at most of the bandwidth provided and you still need to add office devices, your network is on the verge of severe congestion. In that case a 10Gb (or higher) backbone switch is the way to go.
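
A rough sizing estimate along these lines can be written down directly. The per-device average loads below are illustrative assumptions, not measurements; the point is the shape of the calculation, not the numbers:

```python
# Rough sizing sketch: sum assumed average per-device loads and compare
# against the backbone capacity (figures are illustrative only).
avg_load_mbps = {"computer": 30, "wireless_ap": 80, "ip_phone": 0.1}

def backbone_utilization(counts, backbone_mbps):
    demand = sum(avg_load_mbps[kind] * n for kind, n in counts.items())
    return demand / backbone_mbps

office = {"computer": 25, "wireless_ap": 4, "ip_phone": 25}
# 25*30 + 4*80 + 25*0.1 = 1072.5 Mbps of average demand
assert backbone_utilization(office, 1_000) > 1      # 1Gb backbone saturated
assert backbone_utilization(office, 10_000) < 0.15  # 10Gb has headroom
```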

Access Point Bandwidth

All in all, when choosing between a gigabit backbone switch and a 10GbE backbone switch, the difference lies in the bandwidth available to access endpoints. If you deploy a 10Gb switch as the core switch and a gigabit switch with 10Gb uplinks as the access switch, each ordinary port on the access switch can get up to 1000Mb of bandwidth. However, if the backbone is 1Gb and your access switch is 100Mb, only up to 100Mb is available on each access switch port. So in a 1Gb backbone scenario with a gigabit core switch, deploying gigabit PoE switches instead of 100Mb switches at the access layer is the way to keep up 1000Mbps speed.

Conclusion

In summary, choosing between a 1Gb backbone gigabit switch and a 10Gb backbone 10GbE switch depends on the bandwidth your virtualization applications require. If mass data transfer is a regular task, a 10Gb switch should serve as the 10Gb backbone, so the ordinary ports of the gigabit access switches can share up to 1Gb of bandwidth each. Otherwise you can keep your 1Gb backbone; but for 1Gb access, replacing your 100Mb access switches with gigabit PoE switches is a feasible, future-proof solution. An FS SFP switch is a good choice for the 1Gb backbone core switch, while an SFP+ switch suits a 10Gb backbone core switch.