How To Set Up a Trunk Port Between An ESXi4.1 And An HP ProCurve 1810g-24

I was playing with my home lab, which by the way was recently upgraded, and while configuring a trunk port on my HP ProCurve 1810g-24 for one of my ESXi4.1 hosts, I noticed something interesting about how LACP is implemented at the vSwitch level: it actually uses static LACP as opposed to dynamic LACP.

Static LACP, which is effectively no LACP at all, neither transmits nor processes LACPDUs. The member ports do not send LACPDUs, and any LACPDUs they receive are dropped. In other words, the links are aggregated into a trunk, but that trunk exchanges no information about its own status, for instance. The CP is dropped from LACP 🙂
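The difference is easy to picture in code. Here is a hypothetical sketch (the names and logic are mine for illustration, not any vendor's implementation) of what a trunk member port does with an incoming frame in each mode:

```python
LACP_ETHERTYPE = 0x8809  # IEEE "Slow Protocols" EtherType that carries LACPDUs
LACP_SUBTYPE = 0x01      # Slow Protocols subtype identifying an LACPDU

def receive(ethertype: int, subtype: int, mode: str) -> str:
    """Decide what a trunk member port does with an incoming frame.

    In 'static' mode LACPDUs are silently dropped (and none are ever
    transmitted); in 'dynamic' mode they are processed to negotiate
    and monitor the aggregate. Ordinary traffic is forwarded either way.
    """
    if ethertype == LACP_ETHERTYPE and subtype == LACP_SUBTYPE:
        return "drop" if mode == "static" else "process"
    return "forward"
```

This is also why a static trunk can only react to plain link loss: with no LACPDU exchange, neither side has a protocol-level view of its partner's state.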

But first, let's focus on terms you may already have heard: port trunking, EtherChannel, link aggregation, LACP and IEEE 802.3ad. I'm not going to rewrite the definitions, so I'll paste here a portion of VMware KB 1004048, which explains it all:

  • EtherChannel: This is a link aggregation (port trunking) method used to provide fault-tolerance and high-speed links between switches, routers, and servers by grouping two to eight physical Ethernet links to create a logical Ethernet link with additional failover links. For additional information on Cisco EtherChannel, see the EtherChannel Introduction by Cisco
  • LACP or IEEE 802.3ad: The Link Aggregation Control Protocol (LACP) is included in IEEE specification as a method to control the bundling of several physical ports together to form a single logical channel. LACP allows a network device to negotiate an automatic bundling of links by sending LACP packets to the peer (directly connected device that also implements LACP).
  • EtherChannel vs. IEEE 802.3ad: EtherChannel and IEEE 802.3ad standards are very similar and accomplish the same goal. There are a few differences between the two, other than the fact that EtherChannel is Cisco proprietary and 802.3ad is an open standard.

[UPDATE 25/11/2010: IEEE 802.3ad no longer exists as a separate standard; it was moved into IEEE 802.1AX in 2008.]

I have created a little video that shows how to configure a trunk on an HP ProCurve 1810g-24 and how to properly configure a vSwitch on an ESXi4.1 host to work with that trunk. Beware, gore video 🙂

As you can see, it's quite simple to set up a trunk port on an HP ProCurve 1810g-24 and to configure a vSwitch to use that trunk properly. And no, the video is not gore at all, I was kidding 😉

Anyway, there are some caveats I want to talk about. I have already mentioned one above: the static LACP thing, which is really no LACP at all.

There is a second one. You may expect x times the bandwidth after trunking x gigabit ports, right? Not quite. In a trunk, when two computers start communicating with each other, packets always follow the same path, to avoid frames arriving out of order at a single network device. The path is chosen when the first packet is sent to the destination computer, and all subsequent packets follow the same path. Also, and this is important to understand, between those two computers the bandwidth remains the same as before trunking.

Now the next computer sending data may be allocated another path, or may use the same one; the point is that the more computers send data over the trunk, the more evenly the load is balanced, until eventually all the available bandwidth is consumed. This is called Statistical Load Balancing, as opposed to Absolute Load Balancing (or load-based balancing). In other words, load balancing is performed on a conversation-by-conversation basis rather than on a frame-by-frame basis. To accomplish this, when deciding which adapter will transmit a frame, the algorithm hashes the source and destination IP addresses of the frame (Route based on IP hash).
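A minimal Python sketch of the idea, assuming a simple XOR-then-modulo hash (VMware's actual hash implementation may differ in detail):

```python
import ipaddress

def pick_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Choose the uplink for a conversation from its IP pair.

    The hash is deterministic, so every frame of the same src/dst
    conversation uses the same uplink (frames stay in order), while
    different conversations spread statistically across the trunk.
    """
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_uplinks

# Two conversations from the same host can land on different uplinks:
print(pick_uplink("10.0.0.5", "10.0.0.9", 2))   # → 0
print(pick_uplink("10.0.0.5", "10.0.0.10", 2))  # → 1
```

Note how a single pair of hosts never gets more than one link's worth of bandwidth: the gain only appears once many conversations hash across the trunk.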

The last caveat is not really a caveat; it is just by design. All vmnics must be active, you know that, but have you noticed that only one vmnic shows up as attached to a network while the other vmnics do not? See the picture below:

So no worries, this is normal behavior and it is by design 🙂

Feel free to comment as usual!

Sources:
  • HP ProLiant network adapter teaming
  • Understanding NIC Utilization in VMware ESX
  • VMware ESX, NIC Teaming, and VLAN Trunking with HP ProCurve
  • VMware Virtual Networking Concepts


About PiroNet

Didier Pironet is an independent blogger and freelancer with 15+ years of IT industry experience. Didier is also a former VMware, Inc. employee, where he specialised in Datacenter and Cloud Infrastructure products as well as Infrastructure, Operations and IT Business Management products. Didier is passionate about technology; a creative and visionary thinker, he expresses himself with passion and excitement, hopefully inspiring and enrolling people in innovation and change.

24 Responses to How To Set Up a Trunk Port Between An ESXi4.1 And An HP ProCurve 1810g-24

  1. nashwj says:

    Well, I'm glad this works for someone. I'm going nuts in my lab; I can't get this to work. As soon as I put two ports in the channel on the switch, I drop traffic from half my test clients, so it's not hashing something right. Static mode, select my two ports… then half my pings stop. I'm running the latest code. It's a simple setup, but it's not working for me.

    If I disconnect each port on the ESXi host one at a time traffic flows fine so I know it’s not like one NIC is having a problem.

  2. deinoscloud says:

    An excellent blog post from Wade Holmes about LACP and vSS/vDS and the ‘static LACP’ support on vSphere 4.x

  3. Pingback: What’s Best EXT3 or EXT4 For My NFS Datastores? « DeinosCloud

  4. jfinley says:

    I too have been experimenting and looking for info like this. I'm using a ProCurve 1800-G, a little older. I recently upgraded the firmware and I obviously do not have the same options as you. What I do have under TRUNKS is the following: Aggregation Mode (6 options, including DMAC, Pseudo Randomized IP-INFO and the post-upgrade default, SMAC XOR DMAC), a radio box to select ports, and Trunks & Flow Control Enable/Disable. That's it, no other configuration is

    • deinoscloud says:

      Hi and thx for commenting.
      I’m afraid that your switch doesn’t allow trunk in static mode, that is with no LACP.

      I see two options here: 1) you install the Cisco Nexus 1000V in the hypervisor, which is fully LACP compliant, or 2) you upgrade your switch to an 1810 series and set the trunk in static mode.

      There is a third option, that is no trunk at all…

      Happy New Year,

      • jfinley says:

        Well, based on your recommendation, it was cheaper to just buy a 1810. I will test this article internally. Thank you!

        • Paul Ingo says:


          I am pretty sure the HP 1800 can do static trunking: just select whichever aggregation mode is applicable and create the trunk, and don't enable LACP (software v. 3.04).

          Best regards

  5. dk says:

    Hi, great article… I have 2 HP servers running vSphere 4 connected to an HP 1810g-24 switch.
    I have set up 2 trunks; one server works fine as in the video, but the second server on the second trunk just won't allow traffic (can't ping etc.), even after 'enable static capability' is ticked on the switch and applied.
    I have changed the trunk ports and rebooted the switch, but still no luck on the second trunk. Any ideas what may be stopping the traffic?

    • deinoscloud says:

      Hi and thx for your comment.
      Check that the speed of both interfaces on the host side is set to auto/auto and that the host correctly picks up 1000/full.

      Check the tagging config: either there is no tagging and the trunk port members are set to U (untagged), or you tag and the trunk port members are set to T (tagged).

  6. Nick says:

    Hi, fantastic article, it really helped me understand LACP with vSphere, but perhaps I don't fully understand it yet. I am hoping you can help me with a problem I am now having with connectivity from my vCenter server to my hosts. I noticed I would get intermittent connections from vCenter to my hosts: when pinging a host from vCenter I get lost packets, but not when I ping the hosts from my laptop. I have an HP ProCurve 2510G and vSphere 4.1. I used your article as a guide and set up static LACP, then from vSphere changed the setting to IP hash. I only did this on vSwitch0; I did not set it on the management port too, I just left that as Port ID. I am thinking the problem is either not setting the management port to IP hash (see this KB article:) or how I have configured the trunk on the ProCurve. I think I have set it as static LACP:
    Port | Type  | Enabled | Mode | Flow Ctrl | Group | Type
    21   | 1000T | Yes     | Auto | Disable   | Trk1  | LACP

    Please help, thanks in advance.


    • deinoscloud says:

      Hi Nick and thx for your comment.
      I would recommend making one of the VMNIC adapters a standby adapter in the mgmt portgroup.
      So at the vSwitch level you leave all adapters active, and at the mgmt portgroup level you change to an active/standby adapter model.
      Test and let us know…

  7. Nick says:

    Thanks for your reply, Deino 🙂 I did that and it did actually work; there was no packet loss. However, I also spoke with VMware and they advised setting up the mgmt portgroup with IP hash too, with all adapters active (the same as the vSwitch), as this is the recommended solution. Problem solved for me! 🙂

  8. Grateful Aussie says:

    Mate – this post was awesome!!!!

    Our work uses 1800-24G switches so the interfaces are a little different but I would have never been able to figure this out without your post.

    Great job!!

    • Daniele Palumbo says:


      how did you do the job on the 1800?
      With which trunk type?


      • PiroNet says:

        Hi and thx for commenting.
        I’m afraid that your switch doesn’t allow trunk in static mode, that is with no LACP.
        You would have to upgrade your switch to a 1810 series and set the trunk in static mode.

        • Daniele Palumbo says:


          I read it in a post, but I was replying to Aussie (2011/11/19), who said that they are using it…
          Also, as Paul Ingo said, in the v3 firmware of the 1800 you have a lot of options for trunking.
          Besides, I have to check the definition of "static LACP": what is the difference between static LACP and plain trunking, and (in the end) HOW VMware routes over it.
          ip-hash does not seem to me to be "static LACP", but something like a static trunk that routes the traffic by doing a calculation on something and deciding on interface 0 or 1.

          Am I wrong?

          BTW: VMware's handling of this issue sucks… implementing dynamic LACP is standard and easy.


          • PiroNet says:

            Hi Daniele,
            Indeed, it could be that new firmware for an HP 1800 permits enhanced LACP configuration options. Check the HP web site.
            ip-hash is a load-balancing mechanism that picks an uplink based on a hash of the source and destination IP addresses of each packet. It is the preferred LB method when using link aggregation with a Standard vSwitch (VSS) or Distributed vSwitch (VDS). VSS and VDS are not LACP compliant; that's why you need to set LACP in static mode (that is, no LACP). Only the Cisco distributed vSwitch, aka Cisco Nexus 1000V, fully supports LACP. Read more at
            There is a great comparison table of the features for each type of vSwitch.

            Note that the standard vSwitch implementation is very simple, a no-brainer. At the moment, if you need more features, go for DVS v5, available on ESXi 5.0, and if you need LACP, go for the Cisco one.


  9. Pingback: A Year Of Blogging In Summary And Season’s Greetings | DeinosCloud

  10. Very well put. It totally shed light on the difference between Statistical Load Balancing and Absolute Load Balancing. I shall go and modify my network with confidence! 🙂

  11. Miguel Munoz says:

    Dear Sir.
    We have two (2) identical HP 1810-48G switches and we would like to aggregate the SFP ports 51 & 52. In this case, is the procedure to do the aggregation similar to the one used in your video? And do we have to set up the two switches identically, i.e. should ports 51 & 52 be configured the same way on both switches?

    Thank you very much indeed for your help.

    M. Munoz.

  12. sebus says:

    Well, iSCSI NIC teaming is not really something that one should do this way. Instead, it should be done this way (as per best practices):

    While the above might work, it is certainly not the right way, sorry

