Sep 18, 2011
 

For this objective I used the following documents:

  • Documents listed in the Tools section

Objective 2.3 – Configure vSS and vDS Policies

**ITEMS IN BOLD ARE TOPICS PULLED FROM THE BLUEPRINT**

Knowledge

  • Identify common vSS and vDS policies
    • When looking at vSS and vDS policies there is some overlap.  Instead of breaking it down by each type, I will list the common policies (at least these are what I envision as common) and at the end of each, state which type it can be applied to: vSS, vDS or both
    • Common Policies
      • Security – applies to vSS and vDS
        • Policy exceptions include Promiscuous Mode, MAC Address Changes and Forged Transmits
      • Traffic Shaping – can be applied to outbound traffic on a vSS and can be applied to outbound and/or inbound traffic on a vDS
        • Policy exceptions include Average Bandwidth (Kbits/sec), Peak Bandwidth (Kbits/sec) and Burst Size (Kbytes)
      • NIC Teaming (on vSS) and Teaming and Failover (on vDS) – both contain the same main policies, but some options differ
        • Policies include Load Balancing, Network Failover Detection, Notify Switches, Failback and Failover Order
    • Some other common policies that only apply to the vDS are Monitoring and Resource Allocation
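To keep the overlap straight, the summary above can be sketched as a quick lookup table.  The policy names below are my own shorthand for this guide, not vSphere API identifiers:

```python
# Which virtual switch types each policy group applies to (summary of the notes above).
# Policy names are this guide's shorthand, not vSphere API identifiers.
POLICY_APPLIES_TO = {
    "Security": {"vSS", "vDS"},
    "Traffic Shaping (egress)": {"vSS", "vDS"},
    "Traffic Shaping (ingress)": {"vDS"},
    "NIC Teaming / Teaming and Failover": {"vSS", "vDS"},
    "Monitoring": {"vDS"},
    "Resource Allocation": {"vDS"},
}

def policies_for(switch_type):
    """Return the policies configurable on a given switch type ('vSS' or 'vDS')."""
    return sorted(p for p, kinds in POLICY_APPLIES_TO.items() if switch_type in kinds)
```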

 

  • Configure dvPort group blocking policies
    • You can configure an individual dvPort group on a vDS to block all ports (this can’t be done on a vSS)
        1. Log in to vCenter using the VI Client
        2. Go to the Networking view by clicking the View menu > Inventory > Networking (or Ctrl+Shift+N)
        3. Right-click on the dvPort group you want to block ports on and select Edit Settings…
        4. Select the Miscellaneous policy under the Policies tree
        5. Change the drop-down for Block all ports to Yes.  Changing this option to Yes will shut down all the ports for this dvPort group
        6. Click OK

 

  • Configure load balancing and failover policies
    • The load balancing policies determine how outbound traffic is spread across multiple physical adapters (vmnics).  Inbound load balancing is handled by the physical switch that the physical uplinks are connected to
    • Editing these policies for the vSS and vDS is done in two different locations within the VI Client.  I will first explain how to get to them for each type of vSwitch, then explain the policies (the policies are the same with one exception, which will be identified)
    • vNetwork Standard Switch (vSS)
        1. Log in to vCenter or directly to the host using the VI Client
        2. Select a host from the left pane and then click the Configuration tab on the right
        3. Select Networking in the left column of the center pane
        4. Click the Properties hyperlink next to the vSS you want to modify
        5. Select the vSwitch or port group you want to modify and click Edit
        6. Select the NIC Teaming tab

NOTE: All settings from the vSwitch are propagated to individual port groups.  Modifying settings on an individual port group will override the settings propagated by the vSwitch

    • vNetwork Distributed Switch (vDS)
        1. Log in to vCenter using the VI Client
        2. Go to the Networking view by clicking the View menu > Inventory > Networking (or Ctrl+Shift+N)
        3. Right-click on the dvPort group you want to configure and select Edit Settings…
        4. Under Policies select Teaming and Failover
    • Load Balancing and Failover Policies
        1. The first Policy Exception is Load Balancing; there are four options on a vSS and five on a vDS:
          1. Route based on the originating port ID: This setting will select a physical uplink based on the originating virtual port where the traffic first entered the vSS
          2. Route based on IP hash: This setting will select a physical uplink based on a hash produced using the source and destination IP address.  When using IP hash load balancing:
            1. The physical uplinks for the vSS must be in an ether channel on the physical switch
            2. All port groups using the same physical uplinks should use the IP hash load balancing policy
          3. Route based on source MAC hash: This setting is similar to IP hash in that it uses hashing, but the hash is based on the source MAC address and does not require additional configuration on the physical switch
          4. Use explicit failover order: This setting uses the physical uplink that is listed first under Active Adapters
          5. Route based on Physical NIC load (vDS ONLY): This setting determines which adapter traffic is routed to based on the load of the physical NICs listed under Active Adapters.  This is my personal favorite as it requires ZERO physical switch configurations and is true load balancing
        2. The next policy exception is Network Failover Detection; there are two options:
          1. Link Status only: Using this will detect the link state of the physical adapter.  If the physical switch fails or if someone unplugs the cable from the NIC or the physical switch, the failure will be detected and a failover initiated.  Link Status only is not able to detect misconfigurations such as VLAN pruning or spanning tree blocking
          2. Beacon Probing: This setting will listen for beacon probes on all physical NICs that are part of the team (as well as send out beacon probes).  It will then use the information it receives from the beacon probes to determine the link status.  This method will typically be able to detect physical switch misconfigurations and initiate a failover.  Do not use beacon probing when using the IP hash load balancing policy
        3. Select Yes or No for the Notify Switches policy.  Choosing Yes will notify the physical switches to update their lookup tables whenever a failover event occurs or whenever a virtual NIC is connected to the vSS.  If using Microsoft NLB in unicast mode, set this to No
        4. Select Yes or No for the Failback policy.  Choosing Yes will initiate a failback when a failed physical adapter becomes operational again.  If you choose No, a failed physical adapter that becomes operational will only become active again if/when the standby adapter that was promoted fails
        5. The last policy is Failover Order; this has three sections
          1. Active Adapters: Physical adapters listed here are active and are used for inbound/outbound traffic.  Their utilization is based on the load balancing policy.  These adapters will always be used when connected and operational
          2. Standby Adapters: Physical adapters listed here are on standby and only used when an active adapter fails or no longer has network connectivity
          3. Unused Adapters: Physical adapters listed here will not be used
        6. Once finished click OK or Cancel
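To make the hash-based policies above more concrete, here is a rough Python sketch of how a key maps to an uplink index.  The XOR-then-modulo form for IP hash follows the commonly published simplified description of the computation; the MAC hash variant is my own illustration.  Neither is the actual VMkernel code:

```python
import ipaddress

def uplink_by_ip_hash(src_ip, dst_ip, num_uplinks):
    """IP hash (simplified): XOR the 32-bit source/destination addresses,
    modulo the uplink count.  Traffic between the same IP pair always
    lands on the same uplink -- which is why the physical switch ports
    must be configured as one EtherChannel."""
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d) % num_uplinks

def uplink_by_mac_hash(src_mac, num_uplinks):
    """Source MAC hash (illustrative): hash only on the source MAC, so a
    given vNIC always uses the same uplink and no pSwitch config is needed."""
    last_octet = int(src_mac.split(":")[-1], 16)
    return last_octet % num_uplinks

# Same IP pair, same uplink, every time:
a = uplink_by_ip_hash("10.0.0.5", "10.0.0.9", 2)
b = uplink_by_ip_hash("10.0.0.5", "10.0.0.9", 2)
assert a == b
```

Note that none of these sketches looks at actual NIC utilization, which is exactly why Route based on Physical NIC load (vDS only) is the odd one out: it picks the uplink by measured load rather than by a static hash of the traffic.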

 

  • Configure VLAN settings
    • VLAN settings on virtual switches allow traffic flowing to/from virtual machines to be a part of a physical VLAN
    • vNetwork Standard Switch (vSS)
        1. Log in to vCenter or directly to the host using the VI Client
        2. Select a host from the left pane and then click the Configuration tab on the right
        3. Select Networking in the left column of the center pane
        4. Click the Properties hyperlink next to the vSS you want to modify
        5. Select port group you want to modify and click Edit
        6. On the General tab you can optionally modify the VLAN ID if you want to perform VLAN tagging at the vSS layer.  Enter a VLAN ID for this particular port group
        7. Click OK or Cancel when finished
    • vNetwork Distributed Switch (vDS)
        1. Log in to vCenter using the VI Client
        2. Go to the Networking view by clicking the View menu > Inventory > Networking (or Ctrl+Shift+N)
        3. Right-click on the dvPort group you want to configure and select Edit Settings…
        4. Under Policies select VLAN
        5. There are four options for VLAN type:
          1. None: VLAN tagging will not be performed by this dvPort group
          2. VLAN: Enter a valid VLAN ID (1-4094).  The dvPort group will perform VLAN tagging using this VLAN ID
          3. VLAN Trunking: Enter a range of VLANs you want to be trunked
          4. Private VLAN: Select a private VLAN you want to use – the Private VLAN must be configured first under the dvSwitch settings prior to this option being configurable
            1. You can learn more about Private VLANs on pages 27-28 of the vSphere Networking document listed in the tools section
        6. Click OK or Cancel when finished
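The four VLAN type options above boil down to a few validity rules.  A quick sketch of those rules (the function name and value shapes are my own, not anything from the vSphere API):

```python
def validate_dvportgroup_vlan(vlan_type, value=None):
    """Validate the dvPort group VLAN settings described above.
    Returns True when the type/value combination is valid."""
    if vlan_type == "None":
        return value is None              # no tagging by this dvPort group
    if vlan_type == "VLAN":
        return isinstance(value, int) and 1 <= value <= 4094
    if vlan_type == "VLAN Trunking":
        # value: list of (start, end) VLAN ranges, e.g. [(10, 20), (30, 30)]
        return all(1 <= lo <= hi <= 4094 for lo, hi in value)
    if vlan_type == "Private VLAN":
        # must reference a private VLAN already defined on the dvSwitch
        return value is not None
    return False
```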

 

  • Configure traffic shaping policies
    • Traffic shaping can be configured for both the vSS and the vDS.  When configuring on the vSS you can configure it for the entire vSwitch and those settings will be propagated down to all port groups (and can be overridden per port group); traffic shaping applies only to egress traffic on a vSS.  On the vDS you can only configure traffic shaping per dvPort group; traffic shaping can be applied to egress and ingress traffic on the vDS.
    • Except for the fact that you can configure ingress/egress traffic shaping on the vDS, the policies are the same when configuring a vSS or vDS.  Therefore, I will list how to navigate to traffic shaping separately, but the policies and their configurations will be explained as one
    • vNetwork Standard Switch (vSS)
        1. Log in to vCenter or directly to the host using the VI Client
        2. Select a host from the left pane and then click the Configuration tab on the right
        3. Select Networking in the left column of the center pane
        4. Click the Properties hyperlink next to the vSS you want to modify
        5. Select the vSwitch or port group you want to modify and click Edit
        6. Select the Traffic Shaping tab

NOTE: All settings from the vSwitch are propagated to individual port groups.  Modifying settings on an individual port group will override the settings propagated by the vSwitch

    • vNetwork Distributed Switch (vDS)
        1. Log in to vCenter using the VI Client
        2. Go to the Networking view by clicking the View menu > Inventory > Networking (or Ctrl+Shift+N)
        3. Right-click on the dvPort group you want to configure and select Edit Settings…
        4. Under Policies select Traffic Shaping
    • Traffic Shaping Policies
        1. Once you navigate to the appropriate location you will see four different settings (on a vDS you will see these settings twice; once for ingress and once for egress)
        2. The first option is Status and you can choose Enabled or Disabled.  These should be self-explanatory
        3. Average Bandwidth (defined in Kbits/sec): this setting determines the number of Kbits/sec allowed to traverse each individual port, averaged over time
        4. Peak Bandwidth (defined in Kbits/sec): Workloads tend to have periods of burst, meaning network traffic will increase for a short period of time.  The number you enter for Peak Bandwidth determines the maximum number of Kbits/sec that can traverse each individual port
        5. Burst Size (defined in Kbytes): A port gains a burst bonus when it does not use all of the bandwidth it is allocated.  When the port needs more bandwidth than defined in Average Bandwidth, it can use its burst bonus.  The Burst Size setting limits the number of Kbytes gained by the burst bonus
        6. Click OK or Cancel when finished
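The way Average, Peak and Burst interact is essentially a token bucket.  Here is a toy per-second simulation of that interaction, built on my own assumptions; it is not the actual VMkernel shaper:

```python
def shape(traffic_kbits, avg_kbits, peak_kbits, burst_kbytes):
    """Toy per-second shaper: each second a port earns avg_kbits of credit,
    banks unused credit as a burst bonus capped at burst_kbytes, and can
    never transmit faster than peak_kbits in any one second."""
    burst_limit_kbits = burst_kbytes * 8     # Burst Size is in Kbytes; convert
    bonus = 0.0
    sent = []
    for offered in traffic_kbits:            # offered load, Kbits per second
        budget = min(avg_kbits + bonus, peak_kbits)
        tx = min(offered, budget)
        # unused average-rate credit accumulates as the burst bonus (capped)
        bonus = max(min(bonus + avg_kbits - tx, burst_limit_kbits), 0.0)
        sent.append(tx)
    return sent

# A port that bursts with no banked bonus is held to Average Bandwidth;
# two idle seconds first let it bank enough bonus to reach Peak Bandwidth.
cold_burst = shape([3000], avg_kbits=1000, peak_kbits=2000, burst_kbytes=250)
warm_burst = shape([0, 0, 3000], avg_kbits=1000, peak_kbits=2000, burst_kbytes=250)
```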

 

  • Enable TCP segmentation Offload support for a virtual machine
    • TCP Segmentation Offload (TSO) can be enabled at the virtual machine level for VMs running the following guest operating systems
      • Microsoft Windows 2003 EE with SP2 (32 bit and 64 bit)
      • RHEL 4 (64 bit)
      • RHEL 5 (32 bit and 64 bit)
      • SUSE Linux Enterprise Server 10 (32 bit and 64 bit)
    • TSO is enabled by default on the VMkernel interface
    • You must use the enhanced vmxnet virtual network adapter
    • If you are replacing an existing virtual adapter be sure to record the network settings and MAC address of the old adapter
        1. Login to vCenter or directly to an ESXi host using the VI Client
        2. Navigate to the Hosts and Cluster view by clicking the View menu > Inventory > Hosts and Clusters (or Ctrl+Shift+H)
        3. Right-click on the virtual machine you want to enable TSO on > click Edit Settings…
        4. On the Hardware tab click Add (if replacing an existing adapter; record network settings and MAC and remove the Network Adapter on the list first)
        5. Select Ethernet Adapter > click Next
        6. For type, choose VMXNET 3 under the dropdown
        7. Specify the Network to connect to using the dropdown and whether you want the adapter to be connected at power on > click Next
        8. Click Finish
        9. Upgrade VMware Tools manually if necessary (VMware tools are required to use the vmxnet virtual adapters)

NOTE: If TSO somehow gets disabled on a VMkernel interface you must delete said interface and recreate it with TSO enabled
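To visualize what TSO actually offloads: the guest's TCP stack hands the NIC one large payload, and the NIC does the chopping into MSS-sized segments.  A sketch of that chop step (the 1460-byte MSS is just a typical value, not anything vSphere-specific):

```python
def segment(payload: bytes, mss: int = 1460):
    """Split one large TCP payload into MSS-sized segments -- the work that
    TSO moves from the guest's TCP stack onto the (virtual) NIC."""
    return [payload[i:i + mss] for i in range(0, len(payload), mss)]

# With TSO, a 64 KB send is one trip through the guest's stack instead of
# ~45 separate trips; the segmentation happens below the guest.
segments = segment(b"\x00" * 65536)
```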

  • Enable Jumbo Frames support on appropriate components
    • Jumbo Frames can be enabled at the vSS layer, the vDS layer, the VM layer and the VMkernel interface
    • A jumbo frame carries up to 9000 bytes of payload (versus the standard 1500), which means fewer frames are needed to move the same amount of data and more throughput can be pushed through a physical interface.
    • In order to take full advantage of jumbo frames your physical infrastructure must not only support them, but be configured for them, end-to-end
    • vNetwork Standard Switch (vSS)
        1. Log in to vCenter or directly to the host using the VI Client
        2. Select a host from the left pane and then click the Configuration tab on the right
        3. Select Networking in the left column of the center pane
        4. Click the Properties hyperlink next to the vSS you want to modify
        5. Select the vSwitch or vmkernel interface you want to modify and click Edit
        6. Change the MTU from 1500 to 9000
          1. If modifying the virtual switch this is located on the General tab under Advanced Properties
          2. If modifying a vmkernel interface this is located on the General tab under NIC Settings
          3. Click OK or Cancel when finished
    • vNetwork Distributed Switch (vDS)
        1. Log in to vCenter using the VI Client
        2. Go to the Networking view by clicking the View menu > Inventory > Networking (or Ctrl+Shift+N)
        3. Right-click on the dvSwitch you want to configure and select Edit Settings…
        4. On the Properties tab select Advanced
        5. Change the Maximum MTU from 1500 to 9000
        6. Click OK or Cancel when finished
    • Virtual Machine (VM)
        1. Login to vCenter or directly to an ESXi host using the VI Client
        2. Navigate to the Hosts and Cluster view by clicking the View menu > Inventory > Hosts and Clusters (or Ctrl+Shift+H)
        3. Right-click on the virtual machine you want to enable Jumbo Frames on > click Edit Settings…
        4. On the Hardware tab click Add (if replacing an existing adapter; record network settings and MAC and remove the Network Adapter on the list first)
        5. Select Ethernet Adapter > click Next
        6. For type, choose VMXNET 3 under the dropdown
        7. Specify the Network to connect to using the dropdown and whether you want the adapter to be connected at power on > click Next
        8. Click Finish
        9. Upgrade VMware Tools manually if necessary (VMware tools are required to use the vmxnet virtual adapters)
        10. Configure jumbo frames for the virtual network adapter within the guest operating system
        11. Ensure all physical switches are configured for jumbo frames
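The payoff of jumbo frames is easy to put numbers on: fewer frames and less framing overhead for the same payload.  A back-of-the-envelope sketch (the 38-byte per-frame figure approximates Ethernet preamble + header + FCS + inter-frame gap, and the math ignores IP/TCP headers):

```python
import math

ETH_OVERHEAD = 38  # preamble + header + FCS + inter-frame gap, per frame (approx.)

def frames_needed(payload_bytes, mtu):
    """Frames required to carry a payload at a given MTU (ignoring IP/TCP headers)."""
    return math.ceil(payload_bytes / mtu)

def framing_overhead(payload_bytes, mtu):
    """Bytes of Ethernet framing spent moving the payload."""
    return frames_needed(payload_bytes, mtu) * ETH_OVERHEAD

# Moving 1 GB of payload: MTU 9000 needs roughly 6x fewer frames than MTU 1500,
# and correspondingly less framing overhead and per-frame processing.
std_frames = frames_needed(10**9, 1500)
jumbo_frames = frames_needed(10**9, 9000)
```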

 

  • Determine appropriate VLAN configuration for a vSphere implementation
    • There is no blanket VLAN configuration for a vSphere implementation.  As I’m sure you have heard before, it all depends on your environment and what your requirements are.  A few principles to keep in mind though:
      • There are three VLAN configuration options when it comes to VLAN tagging
        • External Switch Tagging (EST): all VLAN tagging of packets happens at the physical switch and all VLAN IDs in the virtual switches should be set to 0 (zero)
        • Virtual Switch Tagging (VST): all VLAN tagging occurs at the virtual switch.  This requires the ports that the physical uplinks are connected to be configured as a trunk port.  VLAN IDs must be specified at the port group level
        • Virtual Guest Tagging (VGT): VLAN tagging is done by the virtual machine.  The 802.1Q VLAN trunking driver must be installed on the VM for this to work properly.  All the ports on the physical switch connected to the physical adapters on the vSwitch must be configured as trunk ports
    • Consider the three VLAN tagging methods above and apply them to your environment.  If you decide to use Virtual Switch Tagging (VST), be aware the physical ports the uplinks are connected to must not only be configured as a trunk, but should also not have a native VLAN, or you can run into conflicts (i.e. traffic tagged with the same VLAN by both the vSwitch and the pSwitch)
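For VST specifically, what the vSwitch actually inserts is a 4-byte 802.1Q tag between the source MAC and the EtherType.  A sketch of that tag's layout:

```python
import struct

def dot1q_tag(vlan_id: int, priority: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag a vSwitch inserts under VST:
    TPID 0x8100, then 3 bits priority, 1 bit DEI (left 0), 12 bits VLAN ID."""
    assert 1 <= vlan_id <= 4094, "valid VLAN IDs are 1-4094"
    tci = (priority << 13) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

tag = dot1q_tag(100)  # the 4 bytes carrying VLAN 100 on the wire
```

Under EST the physical switch adds/strips this tag, under VST the vSwitch does, and under VGT the guest does it itself, which is why those pSwitch ports must be trunks.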

Tools
