Objective 3.1 – Configure Shared Storage for vSphere

For this objective I used the following documents:

  • Documents listed in the Tools section

**ITEMS IN BOLD ARE TOPICS PULLED FROM THE BLUEPRINT**

Knowledge

  • **Identify storage adapters and devices**
    • List of storage adapters includes:
      • SCSI adapter
      • iSCSI adapter
      • RAID adapter
      • Fibre Channel adapter
      • Fibre Channel over Ethernet (FCoE) adapter
      • Ethernet adapter
    • Device drivers are part of the VMkernel and are accessed directly by ESXi
    • In the ESXi context, devices (also sometimes called LUNs, for Logical Unit Numbers) are represented by a SCSI volume presented to the host.  Some vendors expose this as a single target with multiple storage devices (LUNs), and others expose it as multiple targets with one device (LUN) each.  As far as ESXi is concerned, a device is a SCSI volume presented to a host (a quick shell sketch follows)
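
From the ESXi shell you can list the adapters and devices a host sees. A minimal sketch (the vmhba names and output fields will vary by host):

```
# List all storage adapters (vmhba#) along with their drivers and link state
esxcli storage core adapter list

# List all storage devices (LUNs) presented to this host
esxcli storage core device list
```
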
  • **Identify storage naming conventions**
    • Three different types of device identifiers make up the storage naming convention.  Here they are, along with their corresponding device ID formats:
      • SCSI INQUIRY Identifiers:  these are unique across all hosts and persistent.  The host issues the SCSI INQUIRY command and uses the page 83 (Device Identification) information to generate a unique identifier
        • naa.number
        • t10.number
        • eui.number
      • Path-based Identifier:  when a queried device does not return page 83 information, the host generates an mpx.path name, where path represents the path to that particular device.  This is created for local devices during boot and is neither unique nor persistent (it could change on the next boot)
        • Example: mpx.vmhba1.C0:T0:L0
      • Legacy Identifier:  ESXi also generates a legacy name as an alternative with the following format:
        • vml.number
          • The number is a series of digits unique to the device and can be taken from part of the page 83 information, if it is available
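
All of these identifiers are visible from the ESXi shell; a quick sketch:

```
# naa./t10./eui./mpx. identifiers appear in the device list output
esxcli storage core device list | grep -E '^(naa|t10|eui|mpx)\.'

# The legacy vml. names show up alongside the devices on disk
ls /vmfs/devices/disks
```
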
  • **Identify hardware/dependent hardware/software iSCSI initiator requirements**
    • A hardware iSCSI adapter offloads network and iSCSI processing from the host.  There are two types of hardware iSCSI adapters: dependent and independent (ensure either is listed on the HCL)
      • Dependent Hardware iSCSI Adapter
        • These types of adapters depend on VMware networking and the iSCSI management interfaces within VMware
        • Dependent upon the host’s network configuration for IP and MAC
      • Independent Hardware iSCSI Adapter
        • These types of adapters are independent from the host and VMware
        • Provides its own configuration management for IP and other network address assignment
    • The software iSCSI adapter is built into VMware’s code, specifically the VMkernel.  Using this type of adapter, you can connect to iSCSI targets using a standard network adapter installed in the host.  Since this is a software adapter, network processing and encapsulation are performed by the host, which consumes host resources
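
To see which iSCSI adapters a host currently has, a sketch from the ESXi shell:

```
# Lists all iSCSI adapters with their drivers and state; a software iSCSI
# adapter typically appears as a vmhba3x device using the iscsi_vmk driver
esxcli iscsi adapter list
```
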
  • **Compare and contrast array thin provisioning and virtual disk thin provisioning**
    • Virtual Disk Thin Provisioning
      • Allows you to create virtual disks of a logical size that initially differs from the physical space used on a datastore.  If you create a 40GB thin disk, it may initially use only 20GB of physical space and will expand as needed up to 40GB
      • Can lead to over-provisioning of storage resources
    • Array Thin Provisioning
      • Thin provision a LUN at the array level
      • Allows you to create a LUN on your array with a logical size that initially differs from the physical space allocated—can expand up to logical size over time
      • Array thin provisioning is not ESXi-aware without the vSphere Storage APIs for Array Integration (VAAI).  With a VAAI-capable array, the array integrates with ESXi, at which point ESXi is aware that the underlying LUNs are thin provisioned
      • Using VAAI you can monitor space on the thin provisioned LUNs and tell the array when files are freed (deleted or removed) so the array can reclaim that free space
    • My opinion: if your array supports array thin provisioning and VAAI, use array thin provisioning and thick disks within vSphere.  Even though you are choosing a thick disk for your virtual disk type, it is still thin by proxy of the array thin provisioning (see the sketch below)
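
To illustrate virtual disk thin provisioning from the ESXi shell, a minimal sketch (the datastore and VM paths are placeholders):

```
# Create a 40GB thin-provisioned virtual disk; it consumes physical space on
# the datastore only as blocks are actually written
vmkfstools -c 40G -d thin /vmfs/volumes/datastore1/testvm/testvm_1.vmdk

# Check whether the array supports the VAAI primitives for each device
esxcli storage core device vaai status get
```
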
  • **Describe zoning and LUN masking practices**
    • Zoning and LUN masking are similar in that both are used for access control between objects and devices that may or may not need to communicate with each other
      • Zoning – use single-initiator zoning or single-initiator-single-target zoning (more restrictive).  Each vendor has its own zoning practices/best practices
        • Defines which Host Bus Adapters (HBAs) can connect to which targets on the SAN.  Objects that aren’t zoned to one another, or that fall outside a particular zone, aren’t visible to each other
        • Reduces the number of LUNs and targets presented to a particular host
        • Controls/isolates paths in your SAN fabric
        • Prevents unauthorized systems from accessing targets and LUNs
      • LUN Masking – serves the same access-control purpose as zoning, but is applied at the LUN-to-host mapping level
        • Limits which hosts can see which LUNs
        • Can be done at the array layer or the VMware layer (a claim rule sketch follows this list)
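
Masking at the VMware layer is done with claim rules and the MASK_PATH plug-in. A sketch, assuming you want to mask LUN 20 behind vmhba1 (all values are placeholders):

```
# Add a claim rule assigning the path to the MASK_PATH plugin
esxcli storage core claimrule add -r 500 -t location -A vmhba1 -C 0 -T 0 -L 20 -P MASK_PATH

# Load the new rule into the VMkernel and apply it to existing paths
esxcli storage core claimrule load
esxcli storage core claimrule run
```
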
  • **Scan/Rescan storage**
    • There are many different situations in which storage is scanned/rescanned; here are a few
      • When adding a new storage device, storage is scanned/rescanned afterward; a scan for new Storage Devices is done and a scan for new VMFS Volumes initiates
      • After adding/removing iSCSI targets
      • You can perform a Rescan manually by performing the following steps:
        1. Log in to vCenter or directly to the host using the VI Client
        2. Select a host from the left pane and then click the Configuration tab on the right
        3. Select Storage in the left column of the center pane
        4. Click the Rescan All… hyperlink on the top right
        5. Select which items you want to scan for: Scan for New Storage Devices and Scan for New VMFS Volumes (both are checked by default)
        6. Click OK and wait for the scan to complete, after which point any new Storage Devices or VMFS Volumes that exist will be displayed and available
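
The CLI equivalent of Rescan All…, as a sketch from the ESXi shell:

```
# Rescan all HBAs for new storage devices
esxcli storage core adapter rescan --all

# Rescan for new VMFS volumes
vmkfstools -V
```
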
  • **Identify use cases for FCoE**
    • FCoE adapters are used to access Fibre Channel storage.  FCoE encapsulates FC frames into Ethernet frames and uses 10Gbit lossless Ethernet as the transport to the storage array
    • Like iSCSI adapters, there are two types of FCoE adapters: software and hardware
      • Software FCoE Adapter:
        • Uses the native FCoE protocol stack within ESXi to process the FCoE protocol
        • Requires a physical NIC that has I/O offload and Data Center Bridging (DCB) capabilities
        • Maximum of 4 FCoE software adapters per host
      • Hardware FCoE Adapter:
        • Specialized adapter that carries Ethernet and FC over the same connection (SFP or SFP+)
        • For Ethernet, the hardware FCoE adapter appears as a vmnic in the networking area of ESXi
        • For Fibre Channel, the hardware FCoE adapter appears as a vmhba in the storage area of ESXi
    • When would you use a hardware or software FCoE adapter?
      • When your datacenter supports 10Gbit Ethernet
      • When you want to reduce your footprint inside your hosts, as well as reduce the cable count coming from each host
      • When you have existing DCB technologies in your datacenter (i.e. Cisco Nexus 5K/7K or Cisco MDS)
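
The software FCoE adapter has its own esxcli namespace; a sketch (the vmnic number is a placeholder, and this requires a DCB-capable NIC):

```
# List NICs capable of software FCoE
esxcli fcoe nic list

# Activate a software FCoE adapter on a capable NIC
esxcli fcoe nic discover -n vmnic4

# List activated FCoE adapters
esxcli fcoe adapter list
```
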
  • **Create an NFS share for use with vSphere**
    • This will vary with your particular storage device, but the basic steps are (a generic server-side example follows the list):
      1. Create a storage volume
      2. Create a folder on that storage volume
      3. Create a share for that folder
      4. Allow the IP of your host(s) to access the storage
      5. Give the IPs of your host read/write access to the share you created
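
As a generic illustration only (not specific to any array), steps 3–5 look roughly like this on a Linux-based NFS server; the export path and subnet are placeholders:

```
# Grant the ESXi hosts' subnet read/write access to the export;
# no_root_squash is generally needed because ESXi mounts NFS as root
echo '/vol/nfs01 192.168.1.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra
```
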
  • **Connect to a NAS device**
    • Connecting to a NAS device is similar to creating a VMFS datastore; here are the steps (the esxcli equivalent follows the list):
      1. Log in to vCenter or directly to the host using the VI Client
      2. Select a host from the left pane and then click the Configuration tab on the right
      3. Select Storage in the left column of the center pane
      4. Click the Add Storage… hyperlink
      5. Choose Network File System > click Next
      6. Enter the Server, the Folder, and the name you want for the Datastore
      7. Click Next
      8. Click Finish
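
The equivalent from the ESXi shell, as a sketch with placeholder values:

```
# Mount an NFS export as a datastore named NFS01
esxcli storage nfs add --host=192.168.1.20 --share=/vol/nfs01 --volume-name=NFS01

# Verify the mount
esxcli storage nfs list
```
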
  • **Enable/Configure/Disable vCenter Server storage filters**
    • There are four different storage filters in vSphere 5, and they are all enabled by default
      • config.vpxd.filter.vmfsFilter
        • Filters out storage devices, or LUNs, that are already used by a VMFS datastore on any host managed by vCenter.  These LUNs cannot be formatted as a new VMFS datastore and cannot be used as an RDM
      • config.vpxd.filter.rdmFilter
        • Filters out any LUNs already referenced by an RDM on any host managed by vCenter
      • config.vpxd.filter.SameHostAndTransportsFilter
        • Filters out LUNs that are unable to be used as a VMFS datastore extent
          • LUNs that aren’t exposed on all the hosts that the datastore you are trying to extend is exposed to
          • LUNs that use a different storage type than the original datastore (a datastore using local storage can’t use an iSCSI extent to extend the datastore)
      • config.vpxd.filter.hostRescanFilter
        • This filter, when enabled, automatically rescans and updates VMFS datastores after you perform datastore management operations
    • How to Enable/Configure/Disable
      1. Log in to vCenter using the VI Client
      2. Click the Administration menu from the menu bar
      3. Select vCenter Server Settings
      4. Choose Advanced Settings on the left
      5. At the bottom, in the textbox labeled Key:, enter the name of the storage filter you want to enable or disable (config.vpxd.filter.vmfsFilter, .rdmFilter, .SameHostAndTransportsFilter or .hostRescanFilter)
      6. In the textbox labeled Value: type True to enable it or False to disable it
      7. Click Add
      8. To edit a key that is already added in the displayed list click on the current value for the key you want to configure and change it (True to enable, False to disable)
      9. Click OK or Cancel when finished
  • **Configure/Edit hardware/dependent hardware initiators**
    • Independent Hardware iSCSI Adapters
      1. Install the adapter per the vendor documentation
      2. Verify the adapter is installed correctly, then configure it:
      3. Log in to vCenter or directly to the host using the VI Client
      4. Select a host from the left pane and then click the Configuration tab on the right
      5. Select Storage Adapters in the left column of the center pane
      6. If installed properly, you will see the new adapter in this list
      7. Select the newly installed adapter and click the Properties… hyperlink
      8. From here you can change the default iSCSI name, alias and IP settings
      9. Click OK when finished
    • Dependent Hardware iSCSI Adapters:  when you install a dependent hardware iSCSI adapter you will be presented with a standard network port and a storage adapter.  To configure (a shell sketch for step 1 follows the list):
      1. Determine the association between the dependent hardware adapter and the physical NIC
      2. Find the physical NIC listed under Network Adapters that is associated with your dependent hardware adapter; you’ll need this later in the configuration
      3. Log in to vCenter or directly to the host using the VI Client
      4. Select a host from the left pane and then click the Configuration tab on the right
      5. Select Storage Adapters in the left column of the center pane
      6. If installed properly, you will see the new adapter in this list
      7. Select the newly installed adapter and click the Properties… hyperlink
      8. Select the Network Configuration tab and click Add
      9. Add the network adapter that corresponds to the iSCSI adapter listed
      10. Click OK
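
Step 1, determining the NIC association, can also be done from the ESXi shell; a sketch (the vmhba name is a placeholder):

```
# Shows which physical NIC (vmnic) a dependent hardware iSCSI adapter uses
esxcli iscsi physicalnetworkportal list --adapter=vmhba33
```
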
  • **Enable/Disable software iSCSI initiator**
    • Enable software iSCSI initiator:  If you haven’t already added a software iSCSI initiator, do so now:
      1. Log in to vCenter or directly to the host using the VI Client
      2. Select a host from the left pane and then click the Configuration tab on the right
      3. Select Storage Adapters in the left column of the center pane
      4. Click the Add… hyperlink
      5. Select Add Software iSCSI Adapter
      6. Click OK twice
      7. The newly added software iSCSI adapter should show in the list and is enabled by default
      8. To disable, highlight the software iSCSI adapter and click the Properties… hyperlink on the bottom right
      9. Click Configure…
      10. Uncheck the Enabled checkbox
      11. Click OK
      12. Click Close
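
The same enable/disable toggle from the ESXi shell, as a sketch:

```
# Enable (or disable with --enabled=false) the software iSCSI initiator
esxcli iscsi software set --enabled=true

# Confirm the current state
esxcli iscsi software get
```
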
  • **Configure/Edit software iSCSI initiator settings**
      1. Log in to vCenter or directly to the host using the VI Client
      2. Select a host from the left pane and then click the Configuration tab on the right
      3. Select Storage Adapters in the left column of the center pane
      4. Highlight the software iSCSI adapter you want to configure and click the Properties… hyperlink on the bottom right
      5. Click Configure…
      6. Here you can change the Status (enabled or disabled), the iSCSI Name and the iSCSI Alias
      7. Click OK or Cancel when complete
      8. On the Network Configuration tab you can configure VMkernel Port Bindings (covered in the next section)
    • Dynamic Discovery tab
      1. Click Add
      2. Enter the iSCSI target you want to dynamically discover in the iSCSI Server field
      3. Leave the Port set to 3260 unless you have changed it in your environment
      4. Skip CHAP… if you are using it in your environment and do not have it configured to use its parent (we will cover global CHAP configuration in a future section)
      5. Click OK
      6. Once the dynamic discovery is complete all targets for that server should populate in the Static Discovery tab
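
The dynamic (Send Targets) discovery step from the ESXi shell; the adapter name and target address are placeholders:

```
# Add a Send Targets (dynamic discovery) address to the software iSCSI adapter
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260
```
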
    • Static Discovery tab
      1. Click Add
      2. Enter the iSCSI Server IP, the Port and the iSCSI Target Name
      3. Skip CHAP… if you are using it in your environment and do not have it configured to use its parent (we will cover global CHAP configuration in a future section)
      4. Click OK
      5. Click Close once complete
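
And static discovery from the shell; the IQN shown is a placeholder:

```
# Add a static target entry (server address, port and target IQN)
esxcli iscsi adapter discovery statictarget add --adapter=vmhba33 \
  --address=192.168.1.50:3260 --name=iqn.1998-01.com.example:target1
```
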
  • **Configure iSCSI port binding**
      1. Log in to vCenter or directly to the host using the VI Client
      2. Select a host from the left pane and then click the Configuration tab on the right
      3. Select Storage Adapters in the left column of the center pane
      4. Highlight the software iSCSI adapter you want to configure and click the Properties… hyperlink on the bottom right
      5. Select the Network Configuration tab
      6. Click Add…
      7. Select the VMkernel Adapter that corresponds to the Physical Adapter you want to bind
      8. Click OK

NOTE: When using the dependent hardware iSCSI adapter the only VMkernel interface that will display in the list is the one associated with the physical NIC for that dependent hardware adapter
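
Port binding from the ESXi shell, as a sketch (the vmhba and vmk names are placeholders):

```
# Bind a VMkernel interface to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

# List current port bindings for the adapter
esxcli iscsi networkportal list --adapter=vmhba33
```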

  • **Enable/Configure/Disable iSCSI CHAP**
    • There are two types of CHAP you can enable within the iSCSI initiator: one-way CHAP, in which the target (typically a SAN) authenticates the host connecting to it, and Mutual CHAP, in which the host also authenticates the target.  Mutual CHAP is the more secure of the two, and here is how to configure both:
    • One-Way CHAP (target authenticates host)
      1. Log in to vCenter or directly to the host using the VI Client
      2. Select a host from the left pane and then click the Configuration tab on the right
      3. Select Storage Adapters in the left column of the center pane
      4. Highlight the software iSCSI adapter you want to configure and click the Properties… hyperlink on the bottom right
      5. Click the CHAP… button
      6. Select an option
        • Do not use CHAP – CHAP will not be used
        • Do not use CHAP unless required by target – CHAP will only be used if it is required by the back-end storage
        • Use CHAP unless prohibited by target – CHAP will always be used unless the back-end storage is not configured for it
        • Use CHAP – CHAP will be used all the time
      7. Either check the Use initiator name checkbox to use the initiator name (uses the IQN of the adapter) as a login, or enter a name in the Name field
      8. Enter the CHAP secret in the Secret field
    • Mutual CHAP (host authenticates target)
      • In order to use Mutual CHAP you must have the standard CHAP set to the Use CHAP option, or else the only option available for Mutual CHAP will be Do not use CHAP
      1. Log in to vCenter or directly to the host using the VI Client
      2. Select a host from the left pane and then click the Configuration tab on the right
      3. Select Storage Adapters in the left column of the center pane
      4. Highlight the software iSCSI adapter you want to configure and click the Properties… hyperlink on the bottom right
      5. Click the CHAP… button
      6. Select an option
        • Do not use CHAP – CHAP will not be used
        • Use CHAP – CHAP will be used all the time
      7. Either check the Use initiator name checkbox to use the initiator name (uses the IQN of the adapter) as a login, or enter a name in the Name field
      8. Enter the CHAP secret in the Secret field

NOTE: The secret for CHAP and Mutual CHAP must be different
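
Both CHAP directions can also be set from the ESXi shell. A sketch, assuming the software adapter is vmhba33 (all names and secrets are placeholders):

```
# One-way CHAP: the target authenticates the host (--level=required maps to
# the Use CHAP option in the UI)
esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=uni \
  --level=required --authname=iqn.1998-01.com.vmware:esx01 --secret=Secret1

# Mutual CHAP: the host also authenticates the target; note the secret
# must differ from the one-way secret
esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=mutual \
  --level=required --authname=iqn.1998-01.com.vmware:esx01 --secret=Secret2
```
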
  • **Determine use case for hardware/dependent hardware/software iSCSI initiator**
    • Independent hardware iSCSI initiator
      • If you have a very heavy iSCSI environment with a lot of I/O (OLTP workloads, for example) you may want to use a hardware iSCSI initiator.  This offloads all network processing to the physical NIC, which is more efficient and frees up resources on the physical host
    • Dependent hardware iSCSI initiator
      • You may already have NICs that support this option, so there is no need to buy additional hardware
      • If you are in a high iSCSI I/O environment, a dependent hardware iSCSI initiator may work well: iSCSI traffic bypasses the networking stack and goes straight to the hardware adapter, while the network portion of the adapter uses VMkernel networking.  This gives a lower footprint, with one adapter serving both functions
    • Software iSCSI initiator
      • You can leverage existing Ethernet adapters and run networking and iSCSI in the same adapter; VMkernel processes all networking and iSCSI traffic
      • Low cost
  • **Determine use case for and configure array thin provisioning**
    • Configuring array thin provisioning is going to be different for each type of array, so consult the vendor documentation to configure it
    • Use Cases
      • Uniformity – once the LUN is thin provisioned, it won’t matter whether the virtual disk created is thick or thin; it will always be thin because the LUN is thin provisioned
      • Less overhead – when integrated with the storage APIs, the host can inform the array when datastore space is freed up and allow the array to reclaim the freed blocks
      • Ease of use – allows an administrator to easily monitor space usage on thin provisioned LUNs (a quick verification sketch follows)
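
With a VAAI-capable array you can verify from the ESXi shell that ESXi sees a LUN as thin provisioned; a sketch with a placeholder device ID:

```
# 'Thin Provisioning Status: yes' means the array reports the LUN as thin
esxcli storage core device list -d <device_id> | grep -i 'Thin Provisioning'
```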

Tools
