How To Set Up a Trunk Port Between an ESXi 4.1 Host and an HP ProCurve 1810G-24
I was playing with my home lab, which was recently upgraded by the way, and while configuring a trunk port on my HP ProCurve 1810G-24 for one of my ESXi 4.1 hosts, I noticed something interesting about how LACP is implemented at the vSwitch level: it actually uses static LACP as opposed to dynamic LACP.
Static LACP, which is really no LACP at all, neither transmits nor processes LACPDUs: the member ports do not send LACPDUs, and any LACPDUs they receive are dropped. In other words, the links are aggregated into a trunk, but that trunk does not exchange any information about its own status, for instance. The CP is dropped from LACP :)
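To make the difference concrete, here is a minimal, hypothetical sketch of the behavior (this is illustrative toy code, not anything from VMware or HP): a port in static mode never sends an LACPDU and silently drops any it receives, while a port running dynamic LACP both transmits LACPDUs and learns its partner's state from them.

```python
# Toy model of LACPDU handling -- illustrative only, not real switch
# or vSwitch code. "static" = static link aggregation, "dynamic" = LACP.

class Port:
    def __init__(self, mode):
        self.mode = mode          # "static" or "dynamic"
        self.partner_info = None  # peer state, learned in dynamic mode only
        self.tx_log = []          # LACPDUs this port has sent

    def tick(self):
        """Periodic work: only a dynamic port transmits LACPDUs."""
        if self.mode == "dynamic":
            self.tx_log.append("LACPDU")

    def receive(self, lacpdu):
        """A static port silently drops any LACPDU it receives."""
        if self.mode == "static":
            return  # dropped: no negotiation, no partner state
        self.partner_info = lacpdu  # dynamic: record the peer's state

static_port = Port("static")
dynamic_port = Port("dynamic")
for port in (static_port, dynamic_port):
    port.tick()
    port.receive({"system_id": "peer-switch", "key": 1})

print(static_port.tx_log, static_port.partner_info)    # [] None
print(dynamic_port.tx_log, dynamic_port.partner_info)
```

The upshot: with static aggregation neither end can detect a misconfigured or half-working bundle through the protocol itself, which is exactly the caveat with the vSwitch setup described here.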
But first, let’s focus on some terms you may have already heard: port trunking, EtherChannel, link aggregation, LACP and IEEE 802.3ad. I’m not going to rewrite the definitions, so I’ll paste here a portion of VMware KB 1004048, which explains it all:
- EtherChannel: This is a link aggregation (port trunking) method used to provide fault-tolerance and high-speed links between switches, routers, and servers by grouping two to eight physical Ethernet links to create a logical Ethernet link with additional failover links. For additional information on Cisco EtherChannel, see the EtherChannel Introduction by Cisco
- LACP or IEEE 802.3ad: The Link Aggregation Control Protocol (LACP) is included in IEEE specification as a method to control the bundling of several physical ports together to form a single logical channel. LACP allows a network device to negotiate an automatic bundling of links by sending LACP packets to the peer (directly connected device that also implements LACP).
- EtherChannel vs. 802.3ad: EtherChannel and the IEEE 802.3ad standard are very similar and accomplish the same goal. There are few differences between the two, other than the fact that EtherChannel is Cisco proprietary while 802.3ad is an open standard.
[UPDATE 25/11/2010: IEEE 802.3ad doesn't seem to exist anymore as such; it was actually 'moved' into the IEEE 802.1AX standard in 2008.]
I have created a little video that shows how to configure a trunk on an HP ProCurve 1810G-24 and how to properly configure a vSwitch on an ESXi 4.1 host to work with that trunk. Beware, gore video :)
As you can see, it’s quite simple to set up a trunk port on an HP ProCurve 1810G-24 and to configure a vSwitch to use that trunk the proper way. And no, the video is not gore at all, I was kidding ;)
Anyway, there are some caveats I want to talk about. I have already mentioned one above: the static LACP thing, that is, no LACP.
There is a second one. You might expect x times the bandwidth after trunking x gigabit ports, but that is not quite how it works. In a trunk, when two computers start communicating with each other, packets always follow the same path, in order to avoid frames arriving out of order at a single network device. The path is chosen when the first packet is sent out to the destination computer, and all subsequent packets follow that same path. Also, and this is important to understand, between those two computers the bandwidth remains the same as before trunking.
Now the next computer sending data may be allocated another path, or may reuse the same one; the point is that the more computers send data over the trunk, the more evenly the load gets balanced, eventually consuming all the available bandwidth. This is called statistical load balancing, as opposed to absolute load balancing (or load-based balancing). In other words, load balancing is performed on a conversation-by-conversation basis rather than on a frame-by-frame basis. To accomplish this, when deciding which adapter will transmit a frame, the algorithm hashes the source and destination IP addresses of that frame (Route based on IP hash).
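The idea can be sketched in a few lines. This is a toy model, not the actual ESXi hashing algorithm: the only claim is the principle that the uplink is a deterministic function of the (source IP, destination IP) pair, so one conversation is pinned to one link while many conversations spread statistically across the trunk. The uplink names vmnic0/vmnic1 and the addresses are made up for the example.

```python
import ipaddress
from collections import Counter

# Toy model of "Route based on IP hash" -- the real ESXi algorithm
# differs, but the principle is the same: the chosen uplink is a
# deterministic function of the (source IP, destination IP) pair.

uplinks = ["vmnic0", "vmnic1"]

def pick_uplink(src_ip, dst_ip):
    # XOR the two addresses and take the result modulo the number of
    # uplinks: the same conversation always hashes to the same uplink.
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return uplinks[h % len(uplinks)]

# One conversation: every frame takes the same path, so this pair of
# hosts never gets more than one link's worth of bandwidth.
a = pick_uplink("192.168.1.10", "192.168.1.50")
b = pick_uplink("192.168.1.10", "192.168.1.50")
print(a == b)  # True

# Many conversations: the paths spread out statistically over the trunk.
usage = Counter(pick_uplink(f"192.168.1.{i}", "192.168.1.50")
                for i in range(10, 60))
print(usage)
```

Note that this is also why a single big file copy between two hosts never goes faster after trunking: its conversation hashes to exactly one uplink.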
The last caveat is not a real caveat actually, it is just by design. All vmnics must be active, you know that, but have you noticed that only one vmnic shows up as attached to a network while the other vmnics do not? See the picture below:
So no worries, this is a normal behavior and it is by design :)
Feel free to comment as usual!