Configuring Multi-NIC vMotion with Cisco 1000v

As part of a project I’m currently working on, the VMware vDS has been replaced with the Cisco Nexus 1000v. Part of that process was setting up multi-NIC vMotion. While it was relatively simple to do, there wasn’t much material out there on it, so I felt the need to document it.

Here are the generic steps:

  1. Create a vmkernel port for each physical NIC being used for multi-NIC vMotion per host
  2. Create a vethernet port profile on the 1000v for each corresponding vmkernel interface you created in step 1 
  3. Connect each vmkernel port to its matching port profile
  4. Set channel group mode mac-pinning to auto or relative (more on this later)
  5. Create a class map on the 1000v for vMotion traffic. This is important so you don’t saturate your uplinks with vMotion traffic
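
A sketch of what steps 2 and 5 might look like on the 1000v VSM. The profile names, VLAN ID, and policing rate here are illustrative assumptions, not values from my environment, and the exact QoS syntax can vary by 1000v release:

```
! Step 2: one vethernet port profile per vMotion vmkernel interface
! (profile name and VLAN 100 are example values)
port-profile type vethernet vmotion-a
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled

! Step 5: classify vMotion traffic and rate-limit it so it cannot
! saturate the uplinks (2 Gbps is an arbitrary example rate)
class-map type qos match-any vmotion-traffic
  match protocol vmw_vmotion
policy-map type qos limit-vmotion
  class vmotion-traffic
    police cir 2 gbps bc 200 ms conform transmit violate drop
```

The policy map would then be applied to the vMotion vethernet port profiles with a `service-policy` statement.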


In my setup I’m using Dell R820s with two dual-port Broadcom 57810MF 10GbE CNAs. This allowed me to do a four-NIC, multi-NIC vMotion setup. Here is the logical design:



For step 4 of the generic steps above, the ethernet port profile in this example had the following command on it:

    channel-group auto mode on mac-pinning relative

The mac-pinning portion of the command can be set to auto or relative. What’s the difference?

  • mac-pinning auto – when the channel sub-groups are created using mac-pinning auto, they are numbered to match the vmnic number. In the example above, the channel sub-groups will be sub-groups 4-7. This might work fine; however, if the vmnic numbering is different on one or more of your hosts, it could be confusing.
  • mac-pinning relative – when the channel sub-groups are created using mac-pinning relative, they are numbered relative to the number of uplinks, starting with 0. In the example above, the channel sub-groups will be sub-groups 0-3. This command can be useful if you are connecting multiple hosts to the 1000v with different numbers of uplinks and using the same ethernet port profile.
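
With mac-pinning relative, each vMotion vmkernel interface can be pinned to its own sub-group so traffic is spread deterministically across the four uplinks. A sketch, assuming four vethernet port profiles named vmotion-a through vmotion-d (the names are my own, not from the original setup):

```
! Uplink profile: sub-groups are numbered 0-3 regardless of the
! vmnic numbering on any given host
port-profile type ethernet vmotion-uplinks
  channel-group auto mode on mac-pinning relative

! Pin each vMotion vethernet profile to a different sub-group
port-profile type vethernet vmotion-a
  pinning id 0
port-profile type vethernet vmotion-b
  pinning id 1
port-profile type vethernet vmotion-c
  pinning id 2
port-profile type vethernet vmotion-d
  pinning id 3
```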


After reading Chris Wahl’s blog post on whether or not you get a performance increase by enabling jumbo frames on the vmkernel interfaces being used for multi-NIC vMotion, I decided to do some testing of my own. I ran the same test as he did, plus a few of my own using his basic test methodology, but with 10GbE and the Cisco 1000v. I’ll be posting those results soon.
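
For reference, testing jumbo frames in this kind of setup means raising the MTU end to end. On the 1000v side that is generally a single line on the ethernet port profile (the profile name here is an assumption):

```
! Uplink port profile; the upstream physical switch ports must also
! be configured for jumbo frames or large frames will be dropped
port-profile type ethernet vmotion-uplinks
  mtu 9000
```

The vmkernel interfaces themselves also need their MTU raised to 9000 on each host (for example with `esxcli network ip interface set`) before jumbo frames take effect.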

