PernixData Flash Virtualization Platform (FVP): The best idea you never had

PernixData, which recently came out of stealth mode, has introduced a product that will change the way you think about SSDs and server-side caching. PernixData was founded by Poojan Kumar (of Oracle Exadata) and Satyam Vaghani (Mr. VMFS!), so let's just say PernixData knows a thing or two about storage.

What is Flash Virtualization Platform (FVP)?

FVP promises to bring you, and the datacenter, write-through and write-back caching on a PER VIRTUAL MACHINE basis utilizing server-side PCIe SSD cards or standard SSD drives, while (and this is key) preserving all the benefits of virtualization that you enjoy today. Now you might ask (assuming you know my name), "Josh, what about vMotion?" YES, it even supports vMotion. FVP is all software, inserted into the ESXi vmkernel for seamless integration with ESXi and the current data plane, along with integration into the vSphere client for management.

FVP works with any SSD drive, PCIe card, server and storage array listed on VMware’s HCL. FVP works with any application.

How does FVP Work?

From the PernixData homepage, here is an illustration of the FVP concept:

Product diagram

For those unaware of the difference between write-through and write-back caching, it can be defined simply as:

  • Write-through mode functions as read cache only
  • Write-back mode functions as read and write cache
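The two modes can be sketched in a few lines of Python. This is purely illustrative pseudologic, not PernixData's implementation; the class names and the dict-as-SSD stand-in are my own.

```python
class WriteThroughCache:
    """Reads are served from flash when possible; every write also goes
    to the backing array before it is considered complete."""

    def __init__(self, backing_store):
        self.flash = {}              # stand-in for the server-side SSD
        self.backing = backing_store # stand-in for the storage array

    def read(self, block):
        if block in self.flash:      # cache hit: served at flash speed
            return self.flash[block]
        data = self.backing[block]   # cache miss: go to the array
        self.flash[block] = data     # populate the cache for next time
        return data

    def write(self, block, data):
        self.flash[block] = data     # cache for future reads
        self.backing[block] = data   # the write isn't done until the
                                     # array has the data


class WriteBackCache(WriteThroughCache):
    """Writes are acknowledged as soon as they hit flash; dirty blocks
    are flushed to the array later."""

    def __init__(self, backing_store):
        super().__init__(backing_store)
        self.dirty = set()

    def write(self, block, data):
        self.flash[block] = data     # acknowledged immediately
        self.dirty.add(block)        # remember to flush later

    def flush(self):
        for block in self.dirty:
            self.backing[block] = self.flash[block]
        self.dirty.clear()
```

Note the trade-off the sketch makes visible: write-back acknowledges before the array has the data, which is exactly why it needs the protection mechanisms described below.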

FVP, once installed, essentially becomes part of the hypervisor. As a virtual machine writes data to its hard disk, the data is sent through FVP (presumably at the direction of the vmkernel), where it is either cached for future access and passed to the device to perform the write (write-through), or written to FVP, which sends the write acknowledgment to the host and flushes the data to disk at a later time (write-back).

As I stated earlier, FVP can be set per-virtual machine, on the fly. FVP can also be set on a per-datastore basis, so that any virtual machine that resides on that datastore will reap the benefits provided by FVP.

Write-back mode

Write-back mode uses something called flash replicas: SSD devices that sit on remote hosts; let's call them remote devices. Writes that are cached on the flash device of the local host are synchronously replicated to the remote devices, and only then is the write acknowledged. In order to use write-back mode, ALL hosts within the cluster must have PernixData FVP installed.
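The acknowledgment path described above can be sketched as follows. This is my own illustration of the general synchronous-replication idea, not PernixData's code; the function name and structure are assumptions.

```python
def write_back_with_replicas(block, data, local_flash, flash_replicas):
    """Cache a write on local flash, replicate it synchronously to
    remote flash devices, and only then acknowledge the write.

    local_flash and each entry of flash_replicas are dicts standing in
    for SSDs; the backing array sees the data later, at flush time.
    """
    local_flash[block] = data            # land the write on local flash
    for replica in flash_replicas:       # SSDs on remote hosts
        replica[block] = data            # synchronous: must complete first
    return "ACK"                         # acknowledged only after every
                                         # replica holds a copy
```

The point of the replicas is durability: if the local host dies before a flush, another host still holds the acknowledged-but-unflushed data.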

Support for vMotion

Any company creating a server-side caching platform MUST support VMware vMotion out of the gate, or it may not make it much past the starting line. PernixData FVP does just that, so let me tell you how it works. When a virtual machine enabled with FVP is migrated to another ESXi host, whatever is in the FVP cache at the time remains on the source host. If the virtual machine needs to access data held within that cache, it uses the vMotion network to retrieve it. Once retrieved, the data is stored on the local host. Eventually all cached data resides on the local host, which negates the need to pull from remote hosts.
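The post-vMotion read path above can be sketched like this. Again, this is a hypothetical illustration of the behavior as described, not FVP internals; names and structure are mine.

```python
def read_after_vmotion(block, local_cache, remote_cache, backing):
    """Serve a read on the destination host after a vMotion.

    remote_cache stands in for the FVP cache left behind on the source
    host, reachable over the vMotion network; backing stands in for the
    shared storage array.
    """
    if block in local_cache:             # already repopulated locally
        return local_cache[block]
    if block in remote_cache:            # still cached on the old host
        data = remote_cache.pop(block)   # fetch over the vMotion network
        local_cache[block] = data        # repopulate locally; the remote
        return data                      # copy drains away over time
    data = backing[block]                # otherwise, go to the array
    local_cache[block] = data
    return data
```

Each remote hit migrates a block to the destination host, which is why the remote cache eventually empties and all reads become local.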

Current Support

  • Only supports VMware vSphere — this should be expanding to other platforms in the future
  • Only supports block based storage
  • Ships with Windows PowerShell support
  • Heterogeneous flash devices and sizes across hosts within a cluster are supported
  • All VMware vSphere features
  • Installs via VMware Update Manager (via offline bundle)

Some unanswered questions…

In watching the PernixData SFD3 presentation (link below) by CTO and co-founder, Satyam Vaghani, I had some questions that weren’t addressed so I wanted to post them here:

  1. When using write-back mode, what happens if you have a power outage and the UPS fails (or doesn’t exist); what happens to data that hasn’t been written to disk yet?
  2. (I’m sure I know the answer to this one) Any support for RDMs? I know currently, in either compatibility mode, SCSI reads and writes bypass the vmkernel, but if someone could change that I imagine it’d be Mr. VMFS!
  3. Are there any plans to enable FVP on a per-VM basis dynamically? — maybe based on trending data or I/O demand/queuing?

Final Thoughts

PernixData FVP is, at its core, an ingenious and brilliant scale-out model that just makes sense. It allows the application and storage to become intimate like never before, all while providing low latency and high IOPS, WITHOUT sacrificing features. As you scale out your compute, scale out FVP with some SSDs. I’m positive that FVP will become deeply integrated into many VMware designs as it matures and comes out of beta (yes, it’s still in beta!). If you want to join the PernixData FVP beta, you can request access by email. I hope to get on the beta so I can take this for a test drive and see what it can do, but if it can do even half of what it says, then it’s the best piece of software developed in the last 5 years.

Satyam Vaghani’s presentation at SFD3 –

