VCAP5-DCA Objective 4.1 – Implement and Maintain Complex VMware HA Solutions

For this objective I used the following documents:

Objective 4.1 – Implement and Maintain Complex VMware HA Solutions



  • Identify the three admission control policies for HA
    • There are actually three types of admission control mechanisms: host, resource, and HA. As you may be aware, HA is the only one of the three that can be disabled. Several operations within vSphere will trigger admission control, such as powering on a virtual machine, migrating a virtual machine, or increasing CPU/memory reservations on a virtual machine
    • There are three types of HA admission control policies:
      • Host Failures Cluster Tolerates
        • Using this policy you specify the number of host failures to tolerate. Resources are kept available, based on the number of hosts you specify, in order to ensure resource capacity for failed-over virtual machines
        • This is accomplished using a ‘slot size’ mechanism. Slot sizes are logical constructs of memory and CPU and represent a single virtual machine.
        • Slot sizes are calculated based on the largest CPU and memory reservation for a virtual machine. If no reservations are present, the defaults are:
          • 32MHz for CPU
            • this can be changed by modifying the advanced setting das.vmcpuminmhz
          • 0MB + overhead for memory
          • The most restrictive between memory slots and CPU slots will ultimately determine the slot count
        • Lets go through an example:
          • Host 1: 8GB memory, one 2.56GHz CPU
          • Host 2: 8GB memory, one 2.56GHz CPU
          • VM1: 2GB memory reservation, 700Mhz CPU reservation
          • VM2: 3GB memory reservation, 400MHz CPU reservation
          • With the configuration above, the memory slot would be 3GB and the CPU slot would be 700MHz
          • Since these hosts are the same size, each host has 2 memory slots (8GB/3GB, rounded down) and 3 CPU slots (2.56GHz/700MHz, rounded down)
          • Since the memory slot count is the more restrictive, it determines the number of slots per host
          • Total number of cluster slots: 4
          • Used Slots: 2
          • Available Slots: 0
          • Failover Slots: 2
          • Total powered on VMs in cluster: 2
          • Total hosts in cluster: 2
          • Total good hosts in cluster: 2
        • You can view slot information for the cluster using the Advanced Runtime Info
          • Log into the vSphere client > select a cluster
          • Click the Summary tab
          • In the vSphere HA pane click the Advanced Runtime Info hyperlink
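The slot arithmetic above can be sketched in a few lines of Python. This is a rough illustration of the default calculation only; the numbers mirror the two-host example, whereas a real cluster would use the available (post-overhead) host capacity reported by vCenter:

```python
# Sketch of the default HA slot-size math from the example above.
vm_mem_res_gb = [2, 3]        # VM1, VM2 memory reservations (GB)
vm_cpu_res_mhz = [700, 400]   # VM1, VM2 CPU reservations (MHz)

# Slot size = largest reservation across powered-on VMs
mem_slot_gb = max(vm_mem_res_gb)    # 3 GB
cpu_slot_mhz = max(vm_cpu_res_mhz)  # 700 MHz

hosts = [{"mem_gb": 8, "cpu_mhz": 2560}] * 2

# Per-host slot count: floor(host capacity / slot size),
# taking the more restrictive of memory vs CPU
slots_per_host = [
    min(h["mem_gb"] // mem_slot_gb, h["cpu_mhz"] // cpu_slot_mhz)
    for h in hosts
]

total_cluster_slots = sum(slots_per_host)   # 2 + 2 = 4
used_slots = 2                              # two powered-on VMs
failover_slots = total_cluster_slots - used_slots
print(total_cluster_slots, used_slots, failover_slots)  # 4 2 2
```

Note how the CPU slot count (3 per host) never comes into play: the memory slot count (2 per host) is more restrictive, so it alone determines the cluster total.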


      • Percentage of Cluster Resources Reserved
        • This admission control policy implements resource constraints based upon a user-defined percentage of memory and CPU resource
        • Virtual Machine resource requirements:
            • If no CPU reservation exists a default of 32MHz is used
            • If no memory reservation exists a default of 0MB + overhead is used
          • Calculate failover capacity with the following formula
            • (Total Host Resources – Total Resource Requirements) / Total Host Resources
            • Here’s an example:
              • Total Host CPU Resources: 5000MHz
              • Total CPU resource requirements: 2400MHz
              • (5000 – 2400) / 5000 = 52% failover capacity
        • When admission control is invoked, it checks the current CPU and memory failover capacity. If the operation that invoked admission control would violate the percentages defined for the cluster, then admission control will not allow the operation to complete. Here are the steps:
          • Total resources currently being used by powered-on virtual machines is calculated
          • Total host resources are calculated (excluding overhead)
          • CPU and memory failover capacity is calculated
          • The percentage of failover capacity for CPU and memory is compared to the user-defined percentages of the cluster
          • Prior to the operation being performed, a calculation is done to determine the new failover capacity if the operation is allowed. If the new failover capacity violates the user-defined percentages (CPU or memory), then the operation is not allowed
        • If you log into the vSphere client and look at the Summary tab for a cluster you can see information related to this admission control policy in the vSphere HA pane


        • Here you can easily see the current CPU and memory failover capacity as well as the user-defined percentages
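The failover-capacity formula above is simple enough to sketch in code. The numbers mirror the CPU example; the reserved percentage is a hypothetical cluster setting, and a real calculation would also subtract host overhead:

```python
def failover_capacity_pct(total_host_resources, total_requirements):
    """Current failover capacity as a percentage of total host resources."""
    return (total_host_resources - total_requirements) / total_host_resources * 100

# Numbers from the CPU example above
cpu_capacity = failover_capacity_pct(5000, 2400)
print(cpu_capacity)  # 52.0

# Admission control compares the capacity that would remain after an
# operation against the user-defined reserved percentage
reserved_pct = 25          # hypothetical user-defined cluster setting
operation_allowed = cpu_capacity >= reserved_pct
print(operation_allowed)   # True
```

The same calculation runs independently for CPU and memory; an operation is rejected if it would push either one below its user-defined percentage.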
      • Specify Failover Hosts
        • This is the most straightforward policy of the three. This admission control policy sets aside however many hosts you specify ONLY for failover purposes
        • If you have a 4-node HA cluster using Specify Failover Hosts and configure it for 1, then the host you specify will never be used except in the event of an HA failover
  • Identify heartbeat options and dependencies
    • vSphere HA has two heartbeating mechanisms: network and datastore heartbeating
    • Network Heartbeating
      • Network heartbeating is pretty straightforward. Slave nodes send heartbeats to the master node, and the master node sends a heartbeat to each of the slave nodes. The slaves do not send heartbeats to each other, but will communicate during the master node election process
      • Network heartbeats occur every 1 second by default
      • Network heartbeating is dependent on the management address of the host
    • Datastore Heartbeating
      • Datastore heartbeating was introduced in vSphere 5 and adds another layer of resiliency for HA. Datastore heartbeating also helps in preventing unnecessary restarts of virtual machines
      • When a master node stops receiving network heartbeats from a host, it will use datastore heartbeats to determine whether the host is network partitioned, isolated, or has completely failed
      • The datastore heartbeating mechanism is only used when:
        • The master node loses connectivity to slave nodes
        • Network heartbeating fails
      • HA will select two datastores to use for datastore heartbeating by default; you can increase this with an advanced setting, which is covered later. The criteria used for datastore selection are:
        • Datastore that is connected to all hosts
          • this is best effort; if there isn’t a datastore connected to all hosts, HA will select the datastore with the highest number of connected hosts
        • When possible, VMFS datastores are chosen over NFS datastores
        • When possible, the two datastores selected will be on different storage arrays
      • On VMFS datastores, heartbeating creates a file for each host, and the file remains in an up-to-date state as long as the host is connected to the datastore; if the host gets disconnected from the datastore, the file for that host will no longer be up-to-date. On NFS datastores, each host writes to its heartbeat file every 5 seconds
      • If you so desire, you can manually select the datastores to be used for datastore heartbeating
        • Log into the vSphere client > right-click a cluster and select Edit Settings…
        • Under vSphere HA select Datastore Heartbeating
        • Choose the Select only from my preferred datastores radio button
        • Place a check next to at least two datastores you want to use for datastore heartbeating


Skills and Abilities

  • Calculate host failure requirements
    • Earlier I covered how you can manually calculate host failover requirements depending on the admission control policy you’re using, but I’ll go over it again here
    • Host Failures Cluster Tolerates
      • This uses a logical object called a ‘slot’. The number of powered-on virtual machines, and the resources they are configured with, determine how many slots are required for failover on any given host
      • Once you determine the slot size for CPU and Memory you calculate the total number of slots for the host
        • CPU = Total CPU resources/CPU slot size
        • Mem = Total Mem resources/Mem slot size
      • Here is an example of the slots calculation


      • In the example above the host failover requirement could be up to 8 slots
    • Percentage of Cluster Resources Reserved
      • You can configure separate percentages for CPU and memory.
      • If no CPU or memory reservation exists, each VM will use 32MHz and 0MB + overhead, respectively
      • Same scenario as before


      • In this example the percentages for CPU and memory are both set to 30%. The current available percentage is 82% for CPU and 81% for memory. Operations such as powering on and migrating virtual machines will not have any issues as the available percentages are well above the user-defined 30%. Assuming other hosts have the same resource configuration you would need 18% CPU and 19% memory free on another host in order for all virtual machines to be successfully failed over
    • Specify failover hosts
      • There isn’t much to calculate here, the specified hosts will stand idle unless a failover occurs
  • Configure customized isolation response settings
    • You can set custom HA isolation responses for each individual virtual machine
      • Log into the vSphere client
      • Right-click on a cluster > click Edit Settings…
      • Under vSphere HA options click Virtual Machine Options
      • Here you can set the cluster default isolation response and the isolation response for individual virtual machines
      • Find the virtual machine you want to modify > choose an option under the Host Isolation Response column
        • Leave Powered On
        • Power Off
        • Shut Down
        • Use cluster setting


    • There are a multitude of custom HA isolation response settings that you can configure on an HA cluster. These settings are configured at the cluster level, under vSphere HA > Advanced Options…
      • das.isolationaddress[#] – by default the IP address used to check isolation is the default gateway of the host. You can add more IP addresses for the host to use during an isolation check. A total of 10 addresses can be used (0-9)
      • das.usedefaultisolationaddress – this option is either set to true or false. When set to false a host will NOT use the default gateway as an isolation address. This may be useful when the default gateway of your host is an unpingable address, or a virtual machine, such as a virtual firewall
      • das.isolationShutdownTimeout – use this option to specify the amount of time (in seconds) HA will wait for a guest shutdown process initiated by the isolation response before it forcefully powers off a virtual machine
  • Configure HA redundancy
    • Management Network
      • Since HA uses the management network to send out network heartbeats, it is a good idea and a best practice to make your management network redundant. There are two ways you can accomplish this: use NIC teaming on the vSS or vDS where your management network resides, or add an additional vmkernel port on a separate vSS or vDS and enable it for management
      • NIC Teaming
        • Add an additional NIC to the vSS or vDS that hosts the management network
          • Ideally this will be physically connected to a separate switch
        • Set the new NIC as a standby adapter
        • If the active adapter fails, the standby will take over, thus allowing network heartbeats to be transmitted and received
      • Add a new vmkernel port
        • Create a new vmkernel port on an existing or new vSS/vDS that currently is not being used for management
        • Enable the vmkernel port for management
        • Network heartbeats can now be sent/received on this new vSS/vDS which will allow network heartbeats to continue should your primary management network fail
    • Datastore Heartbeat
      • The nature of datastore heartbeating is, by default, redundant. When HA is enabled it will select two datastores to use for datastore heartbeating. VMware states that two datastores are enough for all failure scenarios
      • If you need to configure more than two heartbeat datastores per host you can use this advanced setting
        • das.heartbeatDsPerHost – set this to the number of heartbeat datastores you want to use
      • If possible, ensure you have two datastores that reside on two separate physical storage arrays
    • Network partitions
      • A network partition is created when a host or a subset of hosts loses network communication with the master node, but can still communicate with each other. When this happens an election occurs and one of the hosts is elected as a master
      • The criteria for a network partition are:
        • The host(s) cannot communicate with the master node using network heartbeats
        • The host(s) can communicate with the master using datastore heartbeats
        • The host(s) are receiving election traffic
      • I don’t fully understand what network partitions have to do with “Configuring HA for redundancy”, but I do know that network partitions are bad. Why are they bad?
        • vCenter can only connect to one master host, so if you have a subset of hosts in a network partition, they will not receive any configuration changes related to vSphere HA until the network partition is resolved
        • Hosts can only be added to the partitioned segment that communicates with vCenter
        • When using FT, the primary and secondary VMs could end up in a partition whose master host is not responsible for the primary or secondary FT virtual machine. This scenario could prevent the secondary VM from restarting should the primary VM fail, IF the primary VM lived in a partition whose master was not responsible for that VM
          • This is possible because a master host that holds a lock on a datastore is responsible for all the VMs that live on that datastore. The master host of the network partition that the FT VMs are running in may not be the master that holds the lock on that datastore, and is therefore not responsible for them from an HA perspective
      • So I guess the lesson is, configure HA for redundancy in order to avoid network partitions
        • Ensure management network redundancy at the vmkernel layer, the hardware layer (think NICs on a separate bus) and the physical network layer
  • Configure HA related alarms and monitor an HA cluster
    • There are seven default alarms that ship with vCenter related to HA
      • Insufficient vSphere HA failover resources
      • vSphere HA failover in progress
      • Cannot find a vSphere HA master agent
      • vSphere HA host status
      • vSphere HA virtual machine failover failed
      • vSphere HA virtual machine monitoring action
      • vSphere HA virtual machine monitoring error
    • There are plenty of additional alarms that you can create for clusters and virtual machines related to vSphere HA. Here is a list of available triggers for each
      • Clusters


      • Virtual Machines


    • Aside from the vSphere HA alarms, you can monitor an HA cluster using the Summary tab of a given cluster. In the vSphere HA pane you can look at the Cluster Status and any Configuration Issues that may be related to HA
      • Log into the vSphere client > click a cluster from the inventory > select the Summary tab
      • Click the Cluster Status hyperlink located in the vSphere HA pane
      • There are three tabs in this dialog box
        • Hosts: allows you to see which host is the master and how many hosts are connected to the master


        • VMs: shows you how many VMs are protected/unprotected


        • Heartbeat Datastores: shows you which datastores are being used for datastore heartbeating. Clicking each datastore shows you which hosts are using that particular datastore


      • Click on the Configuration Issues hyperlink
      • Here you can see any configuration issues for vSphere HA


      • As you can see in the example above, there is no management network redundancy for either host that is part of this HA cluster. Remember that having management network redundancy can be key in avoiding network partitions
    • Looking at the Summary tab of each host that is part of an HA cluster will show you the vSphere HA State for that host
      • Log into the vSphere client > select a host from the inventory > click the Summary tab


      • Clicking on the small dialog button will give you more information about the HA state


    • When troubleshooting vSphere HA you can look at logs for a host that is giving you trouble. Here are some key logs and their locations
      • fdm.log – /var/log
      • hostd.log – /var/log
  • Create a custom slot size configuration
    • There are two advanced settings that you can configure in order to create a custom slot size: one for CPU and one for memory
      • das.slotCpuInMHz
      • das.slotMemInMB
    • These two advanced settings allow you to specify the maximum slot size in your cluster
      • If a VM has reservations that exceed the maximum slot size then the VM will use multiple slots
    • Customizing the slot size can have an unintended, and adverse effect during failover
      • Say you have a custom slot size of 1GB of memory, and that nets you 20 slots on a host. If you have a virtual machine on that host with a 5GB memory reservation, then 5 slots need to be available on that host in order for the VM to be powered on. Now, let’s say across your cluster you have 15 free slots, but no host in the cluster has 5 free slots; in that case the VM with the 5GB memory reservation will not be able to power on during a failover
    • To set these advanced settings
      • Log into the vSphere client > right-click a cluster from the inventory > click Edit Settings…
      • Click vSphere HA > click the Advanced Options… button
      • In the option column add a new option das.slotCpuInMHz > specify the maximum CPU slot size in the value column
      • In the option column add a new option das.slotMemInMB > specify the maximum memory slot size in the value column


      • Click OK when finished > click OK again to exit the cluster settings dialog
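The fragmentation risk of a custom slot size can be illustrated with a short sketch. The 1GB custom slot and 5GB reservation follow the earlier example; the per-host free-slot counts are hypothetical:

```python
import math

# Hypothetical numbers following the 1GB custom slot example above
custom_mem_slot_gb = 1        # e.g. das.slotMemInMB = 1024
vm_reservation_gb = 5

# A VM whose reservation exceeds the custom slot size needs multiple slots
slots_needed = math.ceil(vm_reservation_gb / custom_mem_slot_gb)
print(slots_needed)  # 5

# 15 free slots cluster-wide, but no single host has 5 free slots,
# so the VM cannot be restarted during a failover
free_slots_per_host = [4, 4, 4, 3]
can_power_on = any(free >= slots_needed for free in free_slots_per_host)
print(sum(free_slots_per_host), can_power_on)  # 15 False
```

This is exactly the unintended consequence described above: shrinking the slot size raises the cluster slot count on paper while letting a large-reservation VM become unrestartable.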
  • Understand interactions between DRS and HA
    • vSphere DRS and vSphere HA can complement each other when they are enabled on the same cluster. For example, after an HA failover DRS can help to load balance the cluster. Here are some other interactions
      • If DPM has put hosts in standby mode and HA admission control is disabled, there can be insufficient resources available during an HA failover. When DRS is enabled it can work to bring those hosts out of standby mode and allow HA to use them for failover
      • When a host enters maintenance mode, DRS is used to evacuate virtual machines to other hosts. DRS is HA-aware and will not migrate a virtual machine to a host if doing so would violate HA admission control rules. When this happens you will have to manually migrate the virtual machine
      • If you are using required DRS VM-HOST affinity rules this may limit the ability to place VMs on certain hosts as HA will not violate required VM-HOST affinity rules
      • If a VM needs to be powered on and enough resources are available but fragmented across hosts, HA will ask DRS to try to defragment those resources in order to allow the VM(s) to be powered on
  • Analyze vSphere environment to determine appropriate HA admission control policy
    • There are multiple factors to be considered when deciding which HA admission control policy should be chosen. Here are some things to consider
      • Availability requirements – across your cluster you need to determine what resources you have available for failover and how limiting you want to be with those available resources
      • Cluster configuration – the size of your hosts, whether the hosts are sized the same or unbalanced with regards to total resources
      • Virtual Machine reservations – if you are using virtual machine reservations you need to look at the largest reservation
      • Frequency of cluster configuration changes – this refers to how often you are adding/removing hosts from your cluster
    • All these things should be considered when choosing the HA admission control policy. Let’s look at the different HA admission control policies and analyze them based on the factors listed above
      • Specify Failover Hosts – This policy is geared towards availability. If you HAVE to have available resources above all other factors to ensure HA failover and have the budget to let hosts stand idle then choose the Specify Failover Hosts admission control policy
        • Geared towards availability
        • Cluster configuration isn’t an issue, specify the proper amount of failover hosts dependent upon your availability requirements
        • Virtual machine reservations don’t matter at this point
        • Frequency of cluster configuration changes does play a small role here. If you are constantly adding new hosts to your cluster, there may be a requirement to specify additional failover hosts to meet availability requirements
      • Host Failures Cluster Tolerates – This policy isn’t as cut and dried as Specify Failover Hosts. If you are worried about resource fragmentation (meaning you have enough resources spread across the hosts in the cluster, but not enough per host to meet availability requirements during an HA failover), then this policy is for you
        • Meets availability requirements by avoiding the resource fragmentation problem
        • Cluster configuration is a serious issue. If you have unbalanced hosts, meaning some hosts have more total resources than others, this can lead to underutilized hosts. Using this policy, the host with the largest number of slots is NOT included in the failover capacity calculation, therefore limiting the number of cluster slots and, in turn, the number of virtual machines that can be powered on
        • Virtual machine reservations are another serious issue. If you have some VMs with rather large CPU or memory reservations then the number of slots will be smaller. This leads to a conservative consolidation ratio and, again, underutilized hosts
          • You can use advanced settings to limit the size of the CPU and memory slots, but doing so directly undermines resource fragmentation avoidance and may not always meet availability requirements
        • Frequency of cluster configuration changes can be an administrative overhead problem. If you have a 10 host cluster and specify the Host Failures Cluster Tolerates at 3 and then add 10 more hosts, the number of host failures that the cluster will tolerate is still 3. Therefore, if you are constantly adding hosts you will need to change the number of host failures appropriately to meet availability requirements
      • Percentage of Cluster Resources – This policy is meant to be flexible and is the HA admission control policy recommended by VMware for most HA clusters. If you need flexibility and seamless scalability with regards to admission control then this is the policy you’ll want to pick
        • This policy meets availability requirements based on CPU and memory percentages you define as needing to be available
        • Cluster configuration is a non-issue. Regardless of the size of your hosts, balanced or unbalanced, the percentages for CPU and memory that you define will stay the same. You will however need to do a bit more leg work upfront to calculate what percentages to define based on availability requirements. If your hosts are unbalanced it will take more time to do
        • Virtual machine reservations have no effect when using the Percentage of Cluster Resources admission control policy. Again, the user-defined percentages will remain the same regardless of virtual machine reservations
        • The frequency of cluster configuration changes has no impact when using this admission control policy. As you add or remove hosts, the total number of cluster resources that need to be available will dynamically change based on resources being added or removed from the cluster
        • The big downside to using this admission control policy is resource fragmentation. Just because your cluster meets the availability requirements based on the user-defined percentages does not mean that those available resources aren’t fragmented across the hosts in the cluster. As discussed earlier, if DRS is also enabled and resources are fragmented during a failover event, HA will ask DRS to make a best effort to defragment the cluster in order to facilitate the best outcome of the failover event
    • Again, VMware recommends using the Percentage of Cluster Resources admission control policy for most environments. Should you find this policy does not meet some of your business requirements, evaluate the other two policies based on the factors detailed above to determine the proper course of action
  • Analyze performance metrics to calculate host failure requirements
    • Regardless of the HA admission control policy you choose you need to determine what your host failure requirements are. In order to do this you will need to look at the performance metrics of your virtual machines that will be part of the HA cluster
    • To look at the performance metrics of a virtual machine you can use the vSphere client performance tab to look at advanced metrics, such as CPU and memory utilization, and you can do so over a specified period of time
    • You should look at each virtual machine’s performance over a period of time to determine its average utilization. You should also look at each host’s performance over a period of time to determine its resource consumption and resource availability
      • Determining the host’s resource availability should give you a better handle on determining your available cluster resources compared to the average virtual machine resource consumption. When you compare those two metrics you can further determine what percentage of resources you need to always keep available in order to satisfy a HA failover. This really adds value when using the Percentage of Cluster Resource admission control policy
    • A big factor that must be considered is the size of your virtual machine reservations. HA will not power on a virtual machine if doing so violates the admission control policy. HA will also not power on a virtual machine if it can’t meet the reservation. While this doesn’t relate directly to performance metrics, I feel it is an important factor to consider when calculating host failure requirements
  • Analyze HA cluster capacity to determine optimum cluster size
    • Trying to right-size a HA cluster can be challenging, especially in a fluid environment. Above all it will come down to availability requirements
      • What VMs do you need available even when a failover occurs
        • What is their resource utilization
      • How many hosts are currently in your cluster
        • Does this meet your availability requirements
        • How do your availability requirements match up in terms of scaling up within the cluster based on the number of hosts? A better way of asking the question: how many more VMs can I run with my current cluster resources while still maintaining required resource availability?
      • What is your current cluster utilization and availability, and how does that match up against availability requirements
      • What admission control policy are you using
    • These are very basic questions, but answering each of them and taking into consideration your calculated host failure requirements should enable you to determine if you have right-sized your cluster, or if configuration changes need to be made to meet availability, and ultimately, business requirements


Comments (14)

  1. Hi Josh,

    Awesome material!!!

    I’ve found a mistake on “Host Failures Cluster Tolerates” example calculation. You said that mem slot is 5, but 8GB (one host) / 3GB (max mem slot size) = 2,5 (2 slots). Two mem slots are more restrictive than 3 cpu slots. So…

    Total number of cluster slots: 2
    Total number used: 2
    Total available: 0

    Thank you for your effort.


      Hi Jose,

      Thanks for the comment. After looking back through the example, you are correct, the total number of cluster slots is not 6, but it is also not two. Slots are totaled per cluster, not per host. So while there are only 2.5 slots per host, there are a total of 4 slots per cluster (two per host). When I did the math I did it for the cluster, and not each individual host (15/3 = 5). So theoretically, there are 5 memory slots for the cluster, but that 5th slot is fragmented (0.5 slots per host). So, the slot size will, as you stated, be based on memory (2.5 slots), but the total for the cluster will be 4 slots and not 2. I will update the post.

      So here is what the cluster total should look like:

      Total number of cluster slots: 4
      Total number used: 2
      Total available: 2

      Thanks again for pointing that out to me Jose, please let me know if you run across anything else.


      1. Hi Josh,

        I’ve reviewed one more time the example and I think that the correct calculations are this:

        Total slot in cluster: 4
        Used slots: 2
        Available slots: 0
        Failover slots: 2
        Total powered on vms in cluster: 2
        Total hosts in cluster: 2
        Total good hosts in cluster: 2

        Available slots are 0 because we must take into account that the calculations are in the worst case, when you lost one host. So, you have available 2 failover slots.

        What do you think?

        Awesome job one more time.

        Best regards,
        José Luis Gómez


          Hi Jose,

          You’re correct. When I put “Available” slots I should have been more specific and broke it down the way it actually appears in vCenter. I meant available in general, in this case they are only available for failover as you stated. I have updated the post to reflect what you would see in vCenter.

          Thanks again for keeping me straight. Glad to see you are getting use out of the content!


          1. What’s one to do in a heterogeneous storage environment? It’s not as if you can configure a per-VM isolation response? A cluster with some NFS storage, some iSCSI VMFS datastores, some FC datastores, and some FCoE. Likelihood that a host will retain access to the VM network: Unlikely (ESXi second management VMkernel on the same subnet as one of the das.isolationaddress entries, on a 2x10gig interface team on the same dvSwitch as the VM network). Likelihood that a host will retain access to VM datastores: Some datastores Likely (VMs on FC, iSCSI): FC is unaffected by Ethernet, and iSCSI is protected by iSCSI multipathing plus additional 1-gigabit connections through a second storage-dedicated switch. Some datastores Unlikely (NFS): the NFS datastore is on the 2x10gig team shared with ESXi management. Because NFS does not support multipathing, there is no path failover possible in case of network issues if the problem cannot be detected by network teaming on the ESXi host.


            Hi Ramy, sorry I missed your comment here. You can actually override the cluster default VM isolation response and set a per-vm isolation response under cluster settings > vSphere HA > Virtual Machine Options.

    2. Same issue here. Everything was fine and then the vCenter server crashed for an unknown reason, after which it couldn’t connect to the host. I removed the host and tried re-adding it, each time getting this error, either with the IP or DNS name. All DNS was working correctly and suffixes were set up as mentioned, but still no joy. I rebuilt the host with the same details but still could not add it back to vCenter. In the end I added it using another management port (different IP) and it’s all fine. I haven’t had a chance to look into it further or try the original IP again, but perhaps there is some corrupt entry in the database?


        Hi Jacky,

        I’m sorry I missed your comment on here (I would have responded MUCH sooner). Are you still having the problem adding that original IP back? I’d be curious to know the exact errors and maybe look at a few logfiles if you were. Let me know and post some more details and we’ll try to crack that nut!

  2. Hi Josh – one minor correction for the % Cluster Res Rsrvd A/C Policy – the default CPU reservation is 32MHz, not 256MHz (tho it used to be). You have it right in the 1st listed A/C Policy. Thanks for all your hard work! Shane

  3. Hey Josh,

    Great writeup.
    There is correction for the log location though.
    fdm.log and hostd.log are both located in the /var/log directory. In fact, that’s the location where all the logs are stored.

  4. Hi Josh,

    I’m struggling to understand how to calculate the memory total capacity available under resource allocation, which is what HA takes into account for slot sizing.
    I have a host with 3GB of RAM and only 606MB of total memory capacity… why?

