Objective 3.2 – Configure the Storage Virtual Appliance for vSphere

For this objective I used the following documents:

  • Documents listed in the Tools section


**ITEMS IN BOLD ARE TOPICS PULLED FROM THE BLUEPRINT**

Knowledge

  • Define Storage Virtual Appliance (VSA) architecture
    • Below is how VMware describes their two-node VSA Cluster Architecture on page 9 of the VMware vSphere Storage Appliance Installation and Configuration document:

 

    • As depicted in the diagram above, the components needed for the VSA are:
      • Physical hosts with local storage running ESXi 5.0 (can only be configured in a two- or three-node cluster)
      • In clusters using only two nodes (like above), the VSA Cluster Service runs on the vCenter Server machine.  In a three-node cluster the VSA Cluster Service is not required
      • Each host has an active volume and a replica volume of one of the other hosts
    • Networking for VSA
      • Network traffic is broken up into front-end and back-end traffic
        • Front-end traffic
          • Provides communication between each VSA member cluster and the VSA cluster service
          • Provides communication between each VSA cluster member and the VSA manager
          • Provides communication between ESXi and the VSA volumes
        • Back-end traffic
          • Provides clustering communication between all VSA cluster members
          • Provides the network for vMotion traffic between hosts running the VSA
          • Provides replication between a volume and its replica that’s located on another host
      • Each VSA has two virtual NICs, one for front-end traffic and one for back-end traffic; back-end uses private IP space of 192.168.*.*
    • How it works:
      • Each VSA has two volumes: its own active volume and a replica volume for another VSA.  This is true in both two- and three-node clusters.  Each VSA runs an NFS server, which takes the active volume on the VSA, exports it as an NFS volume, and presents that NFS volume back to the ESXi hosts.  VMware states that the underlying storage needs to be in a RAID 10 configuration, so half of the RAID 10 volume is used for the active volume and the other half for the replica volume.  The idea is basically RAID 10 for the volume, but between cluster nodes, so if you lose the physical node, the volume is still active on the secondary… (I know it is a bit convoluted!)
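The mirroring scheme above can be sketched in a few lines of Python (the function names and layout are my own illustration, not VMware code): each node exports its active volume and carries a replica of the next node's, and usable capacity ends up at roughly a quarter of raw disk.

```python
# Illustrative sketch (my own names, not VMware code) of the VSA volume
# layout: each node exports an active volume and holds a replica of the
# next node's active volume, ring-style, in a two- or three-node cluster.

def vsa_layout(nodes):
    """Map each node to its active volume and the volume it replicates."""
    if len(nodes) not in (2, 3):
        raise ValueError("VSA supports only two- or three-node clusters")
    layout = {}
    for i, node in enumerate(nodes):
        neighbor = nodes[(i + 1) % len(nodes)]
        layout[node] = {"active": f"{node}-active",
                        "replica_of": f"{neighbor}-active"}
    return layout

def usable_capacity_gb(disk_gb, disks=8):
    """RAID 10 halves raw capacity; the replica copy halves it again."""
    raid10_gb = disk_gb * disks / 2      # mirrored pairs on the host
    return raid10_gb / 2                 # half active, half replica

print(vsa_layout(["esxi1", "esxi2"]))
print(usable_capacity_gb(500))   # 8 x 500 GB raw -> 1000.0 GB usable
```

The halving-twice arithmetic is why VSA capacity planning feels expensive: RAID 10 costs you half the raw disk, and the cross-node replica costs you half again.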
    • Quick run-down of Hardware and Software ESXi requirements for VSA:
      • 64-bit x86 CPUs @ 2GHz or higher per core
      • Memory
        • 6GB minimum
        • 24 GB recommended
        • 72 GB maximum/tested
      • 4 NIC ports per host
      • 8 hard disks of the same capacity per host, no more than 2TB each
      • RAID controller that supports RAID 10
      • Must be running ESXi 5.0
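The list above lends itself to a quick pre-flight check; here is a hedged sketch of one (the dict field names are mine, not a VMware API):

```python
# Hedged sketch of a pre-flight check against the VSA host requirements
# listed above; the dict field names are my own, not a VMware API.

def check_vsa_host(host):
    """Return a list of requirement violations for one candidate host."""
    problems = []
    if host["mem_gb"] < 6:
        problems.append("less than the 6 GB minimum of RAM")
    if host["nic_ports"] < 4:
        problems.append("fewer than 4 NIC ports")
    if host["disks"] != 8 or len(set(host["disk_tb"])) != 1:
        problems.append("needs exactly 8 hard disks of the same capacity")
    if max(host["disk_tb"]) > 2:
        problems.append("disks larger than 2 TB")
    if "RAID 10" not in host["raid_levels"]:
        problems.append("RAID controller must support RAID 10")
    if host["esxi"] != "5.0":
        problems.append("must be running ESXi 5.0")
    return problems

good = {"mem_gb": 24, "nic_ports": 4, "disks": 8, "disk_tb": [2] * 8,
        "raid_levels": ["RAID 10"], "esxi": "5.0"}
print(check_vsa_host(good))   # []
```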

 

  • Configure ESXi hosts as VSA hosts
    • Your ESXi hosts need to be what the VSA installer refers to as “greenfield”, which is basically a fresh install of ESXi 5 with no virtual machines and no additional configuration
    • The root password should be the same on all ESXi hosts that are joining the VSA cluster
    • Assign static IPs and VLANs (optional) to your ESXi host(s)
    • Assign a hostname and DNS servers on your ESXi host(s)
    • Install vCenter Server as a physical server or VM
      • If you install as a VM do not install it on an ESXi host that you’re using for the VSA
      • Install the vSphere Client on the vCenter Server
    • If using the GUI to install the VSA
      • Create a new vDatacenter
      • Add hosts to vDatacenter
    • If using the command line to install the VSA, do not create a vDatacenter or add hosts to vCenter; the automated installation will handle all of that
    • Install the VSA Manager on the vCenter server

For prerequisite information or step-by-step procedures of the preceding items refer to pages 29-34 of the VMware vSphere Storage Appliance Installation and Configuration document

 

  • Configure the storage network for the VSA
    • You need to have at least four physical NICs per ESXi host
    • You need a Gigabit Ethernet switch
      • If you want to use VLANs it must support 802.1q VLAN trunking
    • You need a DHCP server if you plan to use DHCP to obtain the vSphere Feature IP Address automatically
    • Configuring the networks:
      • Ensure the physical ports on your Gig switch are set for 802.1q VLAN trunking if you are using VLANs and that the VLAN IDs you are using aren’t being pruned
      • Using the GUI to install the VSA (two-member cluster without DHCP, requires 11 static IP addresses)
        • Prior to running the VSA install within the VI client you will need to IP both ESXi hosts and the vCenter server
        • Once you have started the install you will be prompted to enter the remaining IP addresses
          • The first two are global IPs, meaning not host-centric:
            • VSA Cluster IP Address
            • VSA Cluster Service IP Address
          • The next four addresses are for VSA1: these are host-centric
            • Management IP Address for VSA1 (front-end)
            • Datastore IP address for VSA1 (front-end)
            • Back-end IP address for VSA1
            • vSphere feature IP address for ESXi host 1 (can be assigned via DHCP)
            • Specify VLAN (optional)
          • The next four addresses are for VSA2: these are host-centric
            • Management IP Address for VSA2 (front-end)
            • Datastore IP address for VSA2 (front-end)
            • Back-end IP address for VSA2
            • vSphere feature IP address for ESXi host 2 (can be assigned via DHCP)
            • Specify VLAN (optional)
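Counting the addresses above: the eleven static IPs for a two-member GUI install break down as one for vCenter, one per ESXi host (assigned before the installer runs), two cluster-wide IPs, and three per VSA; only the per-host vSphere feature IP may come from DHCP. A small sketch of that inventory (the labels are mine):

```python
# Sketch of the static-IP inventory for a two-member VSA cluster without
# DHCP; the labels are mine, the grouping mirrors the install prompts above.

def static_ip_plan(hosts=("esxi1", "esxi2")):
    plan = ["vCenter Server"]
    plan += [f"{h} management (assigned before the install)" for h in hosts]
    plan += ["VSA Cluster IP", "VSA Cluster Service IP"]     # global IPs
    for h in hosts:                                          # host-centric
        plan += [f"VSA on {h}: management IP (front-end)",
                 f"VSA on {h}: datastore IP (front-end)",
                 f"VSA on {h}: back-end IP"]
    # Per-host vSphere feature IPs are left out: they can come from DHCP.
    return plan

print(len(static_ip_plan()))   # 11 static addresses
```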

 

  • Deploy/Configure the VSA Manager
    • There are a few steps you need to take in order to deploy/configure the VSA Manager.  The first thing you will need to do is install the VSA manager on the vCenter server you are using in your VSA deployment
    • Install VSA Manager
        1. Log on to the vCenter server
        2. Run the VSA Manager install file (as of this posting it is VMware-vsamanager-en-1.0.0-458417.exe)
        3. Choose your language > click OK
        4. Click Next twice
        5. Accept the EULA > click Next
        6. Enter in the IP address and port for the vCenter server that will manage the VSA (should default to the vCenter IP you are on and port 443) > click Next
        7. Enter in license key or leave blank to install in evaluation mode > click Next
        8. Click Install
        9. Click Finish
    • Deploy/Configure the VSA
        1. Log in to vCenter or directly to the host using the VI Client
        2. Navigate to the Hosts and Clusters view (View > Inventory > Hosts and Clusters)
        3. If a vDatacenter doesn’t exist, create one (right-click the vCenter object at the top and click New Datacenter)
        4. Add the two or three hosts you want to use for the VSA to the vDatacenter (remember, these must be greenfield hosts, i.e. no configuration)
        5. Click the vDatacenter on the left and select the VSA Manager tab in the right pane (requires Adobe Flash)
        6. Select Yes to accept the security certificate
        7. Choose New Installation > click Next > click Next
        8. Choose the vDatacenter you want to use > click Next
        9. Choose the hosts you want to use for the VSA by checking the checkbox next to each host (three maximum) > click Next
        10. Enter in IPs for the VSA Cluster IP Address and VSA Cluster Service IP Address
        11. Enter in the IPs for the first host: Management IP Address and Datastore IP Address.  The vSphere Feature IP Address is set to use DHCP, but you can uncheck that and manually enter one
        12. Enter a VLAN ID (optional) for the Front-end network
        13. Enter in the last two octets for the Back-end IP Address
        14. Enter in a VLAN ID (optional) for the Back-end network
        15. Repeat steps 10-14 for each additional host (by default all IPs are auto-filled in a contiguous manner after you enter in the IPs for the VSA Cluster IP and VSA Cluster Service IP)
        16. Click Next
        17. Choose either Format disks on first access or Format disks immediately > click Next
        18. Click Install > click Yes to confirm starting the installation
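Step 15 mentions that the wizard auto-fills the remaining hosts' IPs contiguously once the first ones are entered. A rough approximation of that behavior using Python's stdlib ipaddress module (the exact fill order belongs to the wizard; this is only my illustration):

```python
import ipaddress

# Rough approximation (mine, not VMware's code) of the wizard's auto-fill:
# after host 1's front-end IPs are entered, later hosts' management and
# datastore IPs are proposed contiguously from the same starting address.

def autofill_front_end(first_mgmt_ip, hosts=2):
    base = ipaddress.IPv4Address(first_mgmt_ip)
    proposed, offset = {}, 0
    for h in range(1, hosts + 1):
        proposed[f"VSA{h}"] = {"management": str(base + offset),
                               "datastore": str(base + offset + 1)}
        offset += 2
    return proposed

print(autofill_front_end("10.0.0.20"))
```

In the real installer you can always overtype the proposed addresses, so treat the contiguous fill as a convenience, not a requirement.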

 

  • Administer VSA storage resources
    • Once the install is complete you get a dashboard view of everything going on in the VSA.  Here is how you get into it and which items are administrable
    • The hosts participating in the VSA cluster are now presented with NFS datastores, two or three depending on the number of hosts in the VSA cluster
    • The VSA Manager Tab
        1. Log in to vCenter or directly to the host using the VI Client
        2. Navigate to the Hosts and Clusters view (View > Inventory > Hosts and Clusters)
        3. Click the vDatacenter on the left and select the VSA Manager tab in the right pane (requires Adobe Flash)
        4. Select Yes to accept the security certificate
        5. Here you are presented with a dashboard that:
        6. Shows the VSA Cluster status
        7. Shows the VSA Cluster Network
        8. Shows your storage capacity
        9. If you want to put the VSA cluster in maintenance mode click the Enter VSA Cluster Maintenance Mode… hyperlink > click Yes to confirm > click Close once it is complete
        10. To exit cluster maintenance mode click the Exit VSA Cluster Maintenance Mode… hyperlink
        11. You can change the password for the cluster by clicking the Change Password hyperlink.  Enter in the old username and password and then enter and confirm the new password > click OK
        12. You can reconfigure the network by clicking the Enter Reconfigure Network Mode… hyperlink > click Yes to confirm – THE VSA CLUSTER WILL BECOME UNAVAILABLE WHILE YOU RECONFIGURE THE NETWORK
        13. Enter in your new network settings
        14. Click Next
        15. Click Install > click Yes to confirm
        16. You can export logs by clicking on the Export Logs… hyperlink.  Once it completes click the Download button and choose a place to save the logs
        17. The Datastores view in the lower portion of the pane gives you all the data for each individual datastore
        18. Click the Appliances button to view information about the appliances
        19. Click the Enter Appliance Maintenance Mode… hyperlink if you want to place an individual VSA into maintenance mode
        20. Manage the VSA datastores the same way you would manage any other NFS datastore; see Objective 3.3 for more details on how to manage/administer an NFS datastore

 

  • Determine use case for deploying the VSA
    • There are a few different use cases for the VSA, with the biggest one, I believe, being the SMB
    • SMBs most likely can't afford an expensive SAN, or even an entry-level SAN/NAS.  So even though the VSA license may be a bit expensive (~6K), it is still at least 50% cheaper than an entry-level SAN, and it enables you to utilize existing local storage and get all the features in vSphere that require shared storage.  There are some arguments against it because of the steep price, but I won't get into that here
    • If you are limited on power or space and need a bit of shared storage, the VSA may be a good option for you

 

  • Determine appropriate ESXi host resources for the VSA
    • Determining appropriate ESXi host resources for the VSA is, as always, going to depend on the environment and the requirements.  There are many design considerations that you need to take into account:
      • What kind of workloads will you be running
      • How many hosts you need is determined by how much capacity is needed
      • Each host can have a max of 8 hard disks, validate use case with that in mind
      • How many VMs will you store on the VSA
      • Memory over-commitment is NOT supported for VSA ESXi hosts
      • Set a memory reservation equal to the amount allocated to each VSA appliance VM
      • Disable a virtual machine from doing VMX swapping to a VSA datastore
        • Requires an advanced configuration setting
          • sched.swap.vmxSwapEnabled = FALSE
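Pulling the last three bullets together as plain data (the dict layout is my own, not an API; on a live system you would set these through the VM's settings in the VI Client or an API such as pyVmomi, and note that FALSE is the value that disables VMX swap):

```python
# Plain-data sketch (my layout, not a VMware API) of the per-VM settings
# the bullets above imply for each VSA appliance VM.

def vsa_vm_settings(allocated_mem_mb):
    return {
        # Reservation equals the full allocation: no memory over-commitment.
        "mem_reservation_mb": allocated_mem_mb,
        # Advanced setting; FALSE stops VMX swap landing on a VSA datastore.
        "extra_config": {"sched.swap.vmxSwapEnabled": "FALSE"},
    }

print(vsa_vm_settings(4096))
```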

Tools
