Saturday, November 24, 2007

EtherChannel Load Balancing

Whew! We just had a very busy week prior to the Thanksgiving weekend. I’ve got loads to do, and I’m currently stuck with case reviews. Part of my job is to do case reviews so that I can check if any of my engineers is stuck on a particular case. If I find one, I review it and try to figure out how I can best help my engineer move forward with that case.

I hadn’t gotten far when I came across a case from one of my newest engineers. He’s been stuck with a customer who wanted to ‘equally’ load-balance switch traffic between two switch ports connected to another switch. The customer is currently seeing 100% utilization on one link, and only 10% on the other.

Try as he might, my engineer was having a hard time explaining to the customer that it is impossible to load-balance EtherChannel links in an exact 50-50 split. I see a coaching opportunity here, since EtherChannel is one of the fundamental topics that we train our new hire engineers on. To think that I even reinforced it with a special training session! =(

Enough of the angry outburst for now, and let me share what I know about EtherChannel Load Balancing in general.


EtherChannel Introduction

Before we delve into EtherChannel Load Balancing, let’s have a quick introduction first on what EtherChannel is, and why we need it on our Switches.

By definition, EtherChannel (EC) offers bandwidth scalability within a Local Area Network (LAN) by providing up to 800 Mbps, 8 Gbps, or 80 Gbps of aggregate bandwidth for a Fast EtherChannel (FEC), Gigabit EtherChannel (GEC), or 10 Gigabit EtherChannel (10GEC) connection, respectively.

Under a normal (default) configuration, redundant connections between Switches will be dealt with by Spanning Tree Protocol (STP). STP, which prevents ‘switching loops’, will block the redundant connections until only a single link remains between the Switches. You can just imagine that these blocked links are potential additional bandwidth, but that bandwidth is currently wasted since no traffic is allowed to pass through them.
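On IOS-based Catalyst Switches you can actually see which ports STP has put into the blocking state with a command like the one below (the exact output format varies by platform):

! List the ports that Spanning Tree is currently blocking
show spanning-tree blockedports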

With EtherChannel, these redundant links are aggregated (or ‘channelized’) in order to maximize bandwidth utilization. Switches treat the EC links as a single logical connection, so the bundle can function as either an access or a trunk ‘logical’ port. The EC also shows up as a single port in STP, and therefore the member ports under an EC link will have the same STP state.

Aside from better bandwidth utilization, EtherChannel also offers ‘built-in redundancy’. This means that if any of the links aggregated by the EC fails, the other links stay up and pick up the traffic from the failed link. This results in negligible traffic loss whenever a link fails. Better yet, no STP re-convergence will take place, since STP still treats the aggregated ports as a single logical link.
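To make this concrete, here is a minimal IOS configuration sketch that bundles two Gigabit ports into a single EtherChannel. The interface numbers and channel-group number are just examples, and the channel mode you pick (‘on’, ‘desirable’ for PAgP, or ‘active’ for LACP) depends on your setup:

! Example only: bundle Gi0/1 and Gi0/2 into Port-channel 1
configure terminal
 interface range GigabitEthernet0/1 - 2
  channel-group 1 mode desirable
 end
! Verify that both links show up under the same logical Port-channel
show etherchannel summary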


EtherChannel Load Balancing

To implement Load Balancing, EtherChannel translates the addresses in a packet (either the MAC address, the IP address, or the port number) from their binary form into a numerical value. This numerical value corresponds to one of the links in the EC, which is how packets get distributed across all the links in the EC.

EtherChannel Load Balancing (or frame distribution) uses a Cisco-proprietary hashing algorithm. The algorithm is deterministic: when you use the same addresses and session information, you always hash to the same port in the EC. This prevents out-of-order packet delivery.

The hash algorithm calculates a numerical value between 0 and 7. The switch ports in the EtherChannel then correspond to these numerical values. Each port’s setup includes a mask which indicates which values the port accepts for transmission. If an EC uses the maximum number of switch ports in a single EtherChannel (8 ports), each port accepts only one value. But if you have fewer switch ports in the EC, each port accepts more than one value. The table below shows how many hash values correspond to each switch port, based on the number of switch ports in a given EC link (a worked example follows the table):

No. of Ports in an EC link     Load Balancing
8                              1:1:1:1:1:1:1:1
7                              2:1:1:1:1:1:1
6                              2:2:1:1:1:1
5                              2:2:2:1:1
4                              2:2:2:2
3                              3:3:2
2                              4:4
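To see this in action, take the two-port case from the table: hash values 0 through 3 go out one member link and 4 through 7 go out the other (the 4:4 mask). Whether your actual traffic splits evenly depends entirely on how your flows hash. On an IOS switch you can feed a sample flow to the hash and see which member link it would pick; the Port-channel number and IP addresses below are purely hypothetical:

! Which member link would a flow from 10.1.1.10 to 10.1.2.20 use? (example values only)
test etherchannel load-balance interface port-channel 1 ip 10.1.1.10 10.1.2.20

If most of your real flows hash to the same value (say, everything is sourced from a single server while the switch load balances on source address), they will all land on the same member link, which is exactly how you end up with a 100%/10% split like my engineer’s customer.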



EtherChannel will only aggregate the bandwidth of up to eight compatible (or similarly configured) switch ports into a single logical link. In newer Cisco Catalyst Switches, the Load Balancing policy can be based on the MAC address (L2), the IP address (L3), or the TCP/UDP port number (L4). Aside from that, the policy can also be based on the source address, the destination address, or both.

Please note that you cannot specify or manually choose which link a particular traffic flow uses. You can only influence the Load Balancing by choosing a ‘distribution method’ that gives the best distribution. And if you’re wondering about the hash algorithm itself, it cannot be configured or changed to load balance the traffic among the links in an EtherChannel. In the end, you need to experiment a bit in order to select the best distribution method for a specific EtherChannel.
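On IOS-based Catalyst Switches, the distribution method is a global setting. The sketch below switches it to hashing on both source and destination IP address; the exact keywords available (src-mac, dst-ip, src-dst-ip, and so on) depend on the platform:

! Example: hash on both source and destination IP (keyword support varies by platform)
configure terminal
 port-channel load-balance src-dst-ip
 end
! Confirm the policy that is now in effect
show etherchannel load-balance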


Related Commands

Here are some of the commands that will help with your EtherChannel Load Balancing needs:

For CatOS:

Check Load Balancing (Frame Distribution) Policy

show port channel info

Determine which port in the EC will be used to forward a given traffic flow

show channel hash {src_ip_addr | dest_ip_addr | src_mac_addr | dest_mac_addr | src_port | dest_port} [dest_ip_addr | dest_mac_addr | dest_port]


For IOS:

Check Load Balancing (Frame Distribution) Policy

show etherchannel load-balance

Determine which port in the EC will be used to forward a given traffic flow

test etherchannel load-balance interface port-channel <number> {ip | l4port | mac} [source_ip_add | source_mac_add | source_l4_port] [dest_ip_add | dest_mac_add | dest_l4_port]
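As a quick usage example, here is how the IOS test command could look for a hypothetical pair of MAC addresses; the Port-channel number and addresses are made up for illustration:

! Which member link would frames between these two (example) MAC addresses use?
test etherchannel load-balance interface port-channel 1 mac 0000.1111.aaaa 0000.2222.bbbb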


*For more information, you can visit this link from Cisco.com

Saturday, November 17, 2007

Man vs (Networking) Machines

In our Technical Support Team, we consider what we do every day a learning experience. There are always these moments when we think we’ve already learned everything. But then again, a customer will point out a new angle to an old, known issue (one which we have encountered countless times before), and we’ll be astonished that we hadn’t thought of it that way.

Sometimes, I can't help but think that there's a war going on. You, or any network professional, may even say that this is a crusade. Yup. A crusade to keep your organization's network 'up and running'. Or else, your boss will surely fire you. =)

Have you noticed that no matter how much you or your team take care of your network, it always seems to give you a problem whenever it wants? Have you ever shouted and delivered one of the classic lines from old war movies ("Don't die on me, g'dammit!")?

Well, we hear a lot of these 'war stories' every day. You can even consider us the reinforcements whenever the situation has worsened for the customer. And starting today, I'll give dispatches from the 'frontline', and hope everyone else learns from our experience.

Welcome to Frontline Express! If you want fresh updates, subscribe thru email or one of the available readers. Cheers!