VPlex Non-Uniform and Uniform Configurations

I had a recent conversation with my friend, Joe (@virtacit), and my ValCo Labs brother, Josh (@joshcoen), about the different access methods for EMC’s VPlex. Now, if you aren’t familiar with VPlex, EMC is doing some amazing things with storage virtualization. VPlex allows you to take one (or many) storage arrays (EMC or otherwise) and present their storage to VPlex, which in turn presents the storage to hosts as virtual volumes. This abstraction leads to some pretty neat storage availability configurations. One such configuration, and the basis for our discussion, was VPlex in a vSphere Metro Storage Cluster (vMSC).

There are many resources that discuss vMSC. A few can be found here:



The purpose of this article is to discuss Uniform and Non-Uniform storage access, not recreate the work they have done. After reviewing the documents linked above, I was left scratching my head.

Ok, the way VMware defines it (see the VMware link above):

  • Uniform

Uniform Host Access (Cross-Connect) – This deployment involves establishing a front-end SAN across the two sites, so that the hosts at one site can see the storage cluster at the same site as well as the other site.

  • Non-Uniform

Non-uniform Host Access – This type of deployment involves the hosts at either site seeing the storage volumes through the same site storage cluster only.

Now, the way EMC defines it (see the EMC link above):

  • Uniform

Uniform access is typically based on active/passive technology where all I/O is serviced by only 50% of the available storage controllers in the same physical location (i.e. 50% of the controllers are passive); therefore, all I/O is sent to or received from the same location where the active controller resides, hence the term “uniform”. Typically this involves “stretching” dual controller active/passive mid-range storage products, but can also be architected by using legacy active/passive replication. In both cases the use of an ISL is typically required so all hosts can access the active storage controllers at the remote location. These two types of uniform access are known as “split cluster” and “replication” uniform access respectively.


Figure 10 from the EMC Article located above.

  • Non-Uniform

I/O can be serviced by any available storage controller (100%) at any given location; therefore I/O can be sent to or received from any storage target location, hence the term “nonuniform”. This is derived from “distributing” multiple active controllers/directors in each location and does not require an ISL (although an ISL can be optionally deployed).


Figure 11 from the EMC Article located above.

To me it appeared that VMware and EMC were describing the same technology, but that their definitions of Uniform and Non-Uniform were reversed. Not a huge deal, but confusing when trying to explain why they were different.

Moment of Clarity

After reading (and re-reading) Duncan Epping’s blog post on the subject (http://www.yellow-bricks.com/2012/11/13/vsphere-metro-storage-cluster-uniform-vs-non-uniform/), it started to make sense. The confusion came from VMware’s article mentioning “Uniform (cross-connect)” and its reference to VPlex.

Although VPlex can be configured to provide uniform access, you see the most benefit from non-uniform access. In fact, using uniform access would remove many of its benefits in a vMSC. Uniform access implies an active/passive site configuration or a storage replication technology. With VPlex, however, all controllers are active at both sites, and when configured for non-uniform (cross-connect) access, all paths from a host in SiteA, both to its local VPlex front-end ports and to the remote site’s VPlex front-end ports, are active.

Non-Uniform (VPlex IO access pattern)

VPlex is ideally configured in one of two modes: non-uniform and non-uniform (cross-connect).

  • Non-Uniform
    • Hosts at SiteA are zoned ONLY to the SiteA VPlex cluster
    • Hosts at SiteB are zoned ONLY to the SiteB VPlex cluster
    • ALL write activity of a distributed volume (DVol) traverses the VPlex WANCOM
  • Non-Uniform (cross-connect)
    • Hosts at SiteA are zoned to SiteA AND SiteB VPlex clusters
    • Hosts at SiteB are zoned to SiteA AND SiteB VPlex clusters
    • Write activity COULD be serviced by either VPlex cluster (See note below)

NOTE: Although a VPlex non-uniform cross-connected configuration technically shows all paths as active, VMware and EMC recommend a ‘FIXED’ multipathing policy for these devices to prevent writes to the non-local VPlex cluster. This makes sense if you think about the host’s write process.
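The behavior behind that recommendation can be sketched as a toy model (a hypothetical simplification; real path selection happens inside the ESXi storage stack): under a FIXED policy, I/O is issued on the preferred path while it is alive, and only fails over to another active path when it is dead.

```python
# Toy model of a FIXED path selection policy for a cross-connected
# VPlex distributed volume. Hypothetical simplification only.

def select_path(paths, preferred):
    """Return the path I/O is issued on under a FIXED policy.

    paths: dict of path name -> True if the path is alive
    preferred: the configured preferred path (the local VPlex cluster)
    """
    if paths.get(preferred):
        return preferred
    # Preferred path is dead: fail over to any remaining live path,
    # which may well lead to the remote VPlex cluster.
    for name, alive in paths.items():
        if alive:
            return name
    return None  # all paths down

paths = {
    "siteA-vplex-fe0": True,   # local front-end port (preferred)
    "siteB-vplex-fe0": True,   # remote front-end port (cross-connect)
}
print(select_path(paths, "siteA-vplex-fe0"))  # local path while healthy

paths["siteA-vplex-fe0"] = False
print(select_path(paths, "siteA-vplex-fe0"))  # fails over to the remote path
```

Note that nothing in the policy itself forbids the remote path; FIXED simply keeps I/O on the preferred (local) path for as long as it is available.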

Host Write Penalty Using Non-local Path

From the diagram below you can see that if all paths are active and a write is sent from a host in SiteA to the remote VPlex cluster (non-preferred), the write has to be mirrored back to the local cluster. Two writes must cross the inter-site link: one across the host’s WAN connectivity and one across the VPlex WANCom. If writes are only sent to the local (preferred) cluster, only one write needs to be sent to the remote cluster.
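The arithmetic above can be expressed as a small sketch (a hypothetical helper that just counts inter-site crossings for a single host write):

```python
def isl_crossings(host_site, target_cluster_site):
    """Count inter-site link crossings for one host write to a VPlex
    distributed volume: the host-to-cluster hop (only if the target
    cluster is remote) plus the WANCom hop that mirrors the write to
    the peer cluster."""
    host_hop = 0 if host_site == target_cluster_site else 1
    wancom_hop = 1  # the write is always mirrored to the other cluster
    return host_hop + wancom_hop

# Host in SiteA writing to its local (preferred) cluster: one crossing.
print(isl_crossings("SiteA", "SiteA"))  # 1
# Host in SiteA writing to the remote (non-preferred) cluster: two crossings.
print(isl_crossings("SiteA", "SiteB"))  # 2
```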



What about Uniform and VPlex?

As I mentioned previously, VPlex is typically configured in a non-uniform configuration. There are two scenarios where uniform access may be seen.

  • Forced Uniform (Failure of WANCom/VPlex Cluster Partition)

During a WANCom partition, certain rules (or a witness election) are enacted to determine which site is the authoritative source for writes. This is necessary to avoid a split-brain condition and, ultimately, corruption. In this state, storage is only accessible at the surviving site, and hosts at both sites (assuming they are cross-connected) MUST access storage from a single site. If the hosts are NOT cross-connected, manual intervention may be required depending on the version of VMware ESXi. Check out the failure scenarios table on page 38 of the EMC article above.


Figure 13 from the EMC Article located above.

  • Intentional (Uniform Configuration)

While it is technically possible, and there may be a specific use case, you could create a VPlex virtual volume at a single site and forgo features like AccessAnywhere that make the volume available at the remote location. In this configuration, hosts at both sites can only access the storage at a single site.

Where is Uniform used?

I am not as familiar with other storage virtualization platforms, but I can see the need for this configuration if the arrays at SiteA and SiteB are not truly active-active. In this case, hosts at SiteA and SiteB would have to be ‘Uniform (cross-connected)’ to access the one site’s storage that is read/write, while the other site remains read-only until the sites’ roles are switched. Check out this link for a better example. (http://www.yellow-bricks.com/2012/11/13/vsphere-metro-storage-cluster-uniform-vs-non-uniform/)


One lingering question: what happens to the remaining paths when selecting ‘FIXED’ as the multipathing policy? The remaining paths are all ‘active’ (just not active I/O). If the preferred path (local VPlex cluster) fails, what stops a path to the remote VPlex cluster from being selected?

2 Comments

  1. If a host is connected via Uniform Access (VMware’s definition), meaning it is cross-connected to both sides of the VPLEX, what happens when the HLU of a distributed volume is different, which could be the case because each side of the VPLEX metro maintains its own StorageViews? Would that cause major issues with the host, for example, if some paths had the HLU ID as 5, and that same distributed volume connected to that same host on the other side of the VPLEX had the HLU as 15?


      Hey Bryan,

      That’s a great question. As a best practice we try to keep the HLU consistent across the hosts as you mentioned, but I had to dig to find a technical reason for this. I found a few articles that provide some of the reasoning:

      One example:
      Certain advanced ESXi Server features require that the ESXi Servers have access to shared storage. Shared devices (LUNs) must be presented with same Host LUN ID to each ESXi Host. Failure to present shared VMFS-formatted devices using the same Host LUN ID to each ESXi Server impacts visibility into the VMFS volume.


      It is possible to add a LUN to multiple Storage Groups for ESX, so that some, but not necessarily all, of the LUNs could be shared. If all LUNs are shared, then it may be simpler to have a large storage group for all of the clustered hosts and LUNs. When adding LUNs to multiple Storage Groups, the best practice is to make sure the same HLU (Host Logical Unit) number is always used for the same Array LUN.

      Not matching the HLU correctly can result in ESX misinterpreting the mismatched HLU as another copy of the LUN, such as a Snap or Clone. This in turn could lead to SCSI reservation conflicts and data unavailability.

      There are exceptions, such as Storage Groups for RecoverPoint, because the LUNs in these groups are not mounted directly on ESX and would not cause SCSI reservation issues. Therefore, any RecoverPoint Storage Groups should be ignored when looking for HLU conflicts.

      I believe the storage view with the VPLEX DVV should be consistent between sites to prevent these issues from being exposed to the ESXi hosts. It has been a while since I messed around in the VPLEX console, so I don’t recall if there is anything preventing you from matching the HLU. Of course, if it is already different, there is a process to correct it. Or even svMotioning resources to a correctly provisioned DVV. Thoughts?
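As a rough illustration of the consistency check described above (hypothetical data structures; in practice the per-site storage views would come from the VPLEX management interface):

```python
def hlu_mismatches(site_a_view, site_b_view):
    """Compare per-site storage views (volume name -> HLU) and return
    the volumes whose HLU differs between sites -- the situation that
    can make ESXi misinterpret one presentation as a copy of the LUN."""
    mismatches = {}
    for vol in set(site_a_view) & set(site_b_view):
        if site_a_view[vol] != site_b_view[vol]:
            mismatches[vol] = (site_a_view[vol], site_b_view[vol])
    return mismatches

site_a = {"dvol_01": 5, "dvol_02": 7}   # StorageView at SiteA
site_b = {"dvol_01": 15, "dvol_02": 7}  # StorageView at SiteB
print(hlu_mismatches(site_a, site_b))   # {'dvol_01': (5, 15)}
```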

      Thanks for the comment!
