Changing of the Guard: PernixData goes GA with FVP 1.0


On August 6th, 2013, PernixData announced general availability of their Flash Virtualization Platform (FVP), version 1.0. For those of you who aren’t familiar with FVP, you can check out my post from May of this year, “PernixData Flash Virtualization Platform (FVP): The best idea you never had.” Prior to the release I had the pleasure of being on a call with Satyam Vaghani, CTO and co-founder of PernixData, and Jeff Aaron, VP of Marketing at PernixData, and they were kind enough to share some details on the GA release.


Pricing and Support:

Not all of the pricing details have been released yet, but here’s what I know:

  • $7500 per physical server (Enterprise pricing)
  • SMB pricing will be offered without compromising the rich feature set that FVP offers; however, specific pricing has yet to be released
  • Service Provider pricing will also be available; however, I don’t have details
  • Support will be offered in Platinum and Gold flavors
    • Platinum support will be 24x7x365
    • I don’t have details on Gold support, but I imagine it will be 8×5 support of some sort

I like that the enterprise pricing model is per physical server and not per VM or per socket. Per-VM or per-socket pricing would give businesses yet another cost to think about when deploying new workloads or buying larger physical hosts. Per physical host keeps things simple: no need to worry about the number of sockets, the amount of RAM, or other silliness.

I believe this is the beginning of a systemic change in how we architect virtualized environments, a changing of the guard if you will. As the product matures I’m sure more hypervisors will be supported, as well as more features. While FVP easily scales as you add new hosts, I’ve wondered how PernixData will address scaling when you need more IOPS, but not necessarily more compute. Good ol’ RAID comes to mind: throw three SSDs in a host and configure them in RAID 5; BOOM, more IOPS. I don’t mean using RAID at the controller level; I’m suggesting that FVP figure out how to generate more IOPS by using multiple SSDs in a host, whether via a RAID-esque technique or something completely different.

I’ve had some people ask me (my mother) if FVP is really making that much of a difference. Well, I’m developing a proper test plan, will be putting the product through its paces, and will publish my results when complete. TANGENT: My mother thinks VMware is a type of Tupperware. From the little bit that I’ve played around with FVP, it delivers on its promise.

UPDATE: Frank Denneman from PernixData wrote a blog post addressing RAID, which you can find here. Bottom line: don’t use RAID, and look for a follow-up post from Frank on how FVP uses multiple SSDs.

5 Comments

  1. Josh,

    In regard to scaling, some testing I have done with the beta and RC versions showed that RAID is not required. I believe that the documentation also recommends against it. Just add another SSD, mark it as usable for FVP, and the software handles the rest. If a disk fails (or is removed), FVP keeps working (I tested this scenario).

  2. Thanks for the comment Peter.

    You’re right, the docs do recommend that direct access be given to the SSD, and that it not be placed in a RAID configuration. I wasn’t implying that RAID be set on the hardware controller. I meant that FVP could take advantage of multiple SSDs in a host (internally, using its own code) with something like RAID in order to obtain more IOPS.


  3. Pingback: Blog roundup from 12.08.13 |

  4. Hi Josh,

    Nice article. One thing to think about: please do not use any type of RAID. It will not offer any scalability benefits. FVP is designed to work with multiple devices but will not use multiple disks for a single VM. I’m preparing a post on that particular topic, so stay tuned. Another reason is to reduce the amount of random IO. Please read this article:

  5. Pingback: Design considerations for the host local FVP architecture -
