Disks not available for use in VSAN configuration

Since VMware opened up the beta for VSAN I have wanted to take it for a spin in the lab, but I just haven't had the time. Well, I finally had a little bit of time. My three lab hosts had ESXi installed on their spinning disks and had no SSDs installed, so to get started I reinstalled ESXi on USB sticks and added some SSDs. Good enough to get started, right?

I logged into the web client and enabled VSAN in manual mode, but when I went to configure the disk groups two of the hosts were only showing 0 of 1 disks in use instead of 0 of 2 disks.

[Screenshot: disk group configuration showing two hosts with 0 of 1 disks in use]

Taking a deeper look into the two hosts that were only showing one disk, I discovered that on labs-vmhost02 and labs-vmhost03 the spinning disk wasn't showing as an available disk; only the SSDs were available. There are two reasons for this:

  1. When ESXi was installed on the flash drive, the spinning disk was used for the scratch location
  2. The spinning disk is hosting the diagnostic/coredump partition

In order to utilize the spinning disk for VSAN the partitions need to be moved or removed. Unfortunately, when you try to remove these partitions using partedUtil it fails because they are actively being used, and the following error occurs:

Error: Read-only file system during write on /dev/disks/t10.ATA_____TOSHIBA_MK3252GSX__________________________________4847P0FUT
Unable to delete partition 2 from device /dev/disks/t10.ATA_____TOSHIBA_MK3252GSX__________________________________4847P0FUT

For the purposes of this post we'll be moving the scratch location and unconfiguring the coredump partition. You could also change the coredump partition to a different device and partition number instead of unconfiguring it. Doing this will allow us to delete the partitions using partedUtil.

Before we can start changing things we need to identify the disk and partition numbers. First, let's get the device ID of the disk we're trying to use for VSAN:

  1. SSH into the host and run the command shown below to list the storage devices
  2. Find the disk in question (I knew which disk to look for because it was the only Toshiba disk in the system) and copy the device ID; this is the first line of the device's entry
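A device listing along these lines will surface it (a sketch; the exact output will vary by host and build):

# List all storage devices; each device's entry begins with its device ID (a t10./naa. identifier)
esxcli storage core device list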

[Screenshot: device list output showing the Toshiba disk and its device ID]

The device ID we’ll be using is:   t10.ATA_____TOSHIBA_MK3252GSX__________________________________4847P0FUT

Now we need to find any partitions associated with this device. While still SSH’d into the host, run the following command:
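A partedUtil getptbl call will show this, using the device path copied above (the device ID here is specific to my lab):

# Show the partition table for the device; each line lists a partition number and its type
partedUtil getptbl /dev/disks/t10.ATA_____TOSHIBA_MK3252GSX__________________________________4847P0FUT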

The output will show what partitions, if any, are located on the device.

[Screenshot: partition table output showing partitions 9 and 2]

Here you can see two partitions live on this disk: partitions 9 and 2. Partition 9 is the vmkDiagnostic partition. Partition 2 is linuxNative, and this is the currently configured scratch location.

Changing the Scratch Location

This can be done in a few different ways, both from the GUI and from the command line. In this example we'll be changing the scratch location to a network location (an iSCSI-attached VMFS datastore). You'll need to know the VMFS UUID of the location you want to change it to. Run the following commands to get the new location for scratch and then change it:
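As a sketch of the command-line approach (the datastore UUID and the .locker folder name below are placeholders for your own environment):

# List mounted filesystems to find the UUID of the target VMFS datastore
esxcli storage filesystem list

# Point the configured scratch location at a folder on that datastore
# (the setting takes effect after a reboot)
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/<datastore-UUID>/.locker-labs-vmhost02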

Note: If using a network location, ensure you have created a folder in that network location specific to that host, such as .locker-hostname

Before you'll be able to delete the partition where the scratch location was configured, you will need to reboot the host.

Unconfiguring the Coredump Partition

Normally you don't want to unconfigure the coredump partition as it's used to store dumps and screenshots if the system has a PSOD, but since this is a lab environment I decided to just unconfigure it. To unconfigure the coredump partition, run the following command:
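Something along these lines does it (the get call at the end is just to verify the result):

# Deactivate and remove the configured diagnostic/coredump partition
esxcli system coredump partition set --unconfigure

# Confirm that no coredump partition is configured
esxcli system coredump partition get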

Deleting the Partitions from the Device (ensure you've rebooted the host first)

Now you should be able to delete partitions 9 and 2 by running the following commands:
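Using the device ID identified earlier and the partition numbers reported by getptbl, the deletes look like this:

# Remove the vmkDiagnostic partition (9) and the former scratch partition (2)
partedUtil delete /dev/disks/t10.ATA_____TOSHIBA_MK3252GSX__________________________________4847P0FUT 9
partedUtil delete /dev/disks/t10.ATA_____TOSHIBA_MK3252GSX__________________________________4847P0FUT 2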

Once you've done this on all affected hosts, you will be able to claim both the SSDs and spinning disks for the VSAN disk groups.

[Screenshot: both the SSDs and spinning disks now showing as available]

Comments


  1. Had the same issue, but my issue was caused by having previously installed ESXi 5.5 U1 on the PCI-e SSD card first, just for testing purposes. After my brief test with ESXi installed directly on the PCI-e SSD card I later installed a clean copy of ESXi 5.5 U1 to the internal USB stick plugged into the motherboard.

    Everything was fine, UNTIL I tried to add the local disks on this particular host to the VSAN cluster. It ultimately failed because ESXi decided to make the VMkernel diagnostic partition active on the SSD and not use the diagnostic partition that was on the USB stick. I guess it creates it regardless during the installation, but later decides to use the existing diagnostic partition that it must have found on the SSD card.

    Note that changing the scratch location to “/tmp” did not resolve this issue, although that did have to be done too, because ESXi defaults to using a scratch location on anything available over the USB stick.
