VMware over NFS Myths


Scott Lowe says:

  • Myth #1: All VMDKs are thin provisioned by default with NFS, and that saves significant amounts of storage space. That’s true—to a certain point. What I pointed out back in March of 2008, though, was that these VMDKs are only thin provisioned at the beginning. What does that mean? Perform a Storage VMotion operation to move those VMDKs from one NFS datastore to a different NFS datastore, and the VMDK will inflate to become a thick provisioned file. Clone another VM from the VM with the thin provisioned disks, and you’ll find that the cloned VM has thick VMDKs. That’s right—the only way to get those thin provisioned VMDKs is to create all your VMs from scratch. Is that what you really want to do?
  • Note: VMware vSphere now supports thin provisioned VMDKs on all storage platforms and corrects the issue of thin provisioned VMDKs inflating during a Storage VMotion or cloning operation, so this point is somewhat dated. (The sparse-file sketch after this list illustrates how thin provisioning works in principle.)

     

  • Myth #2: NFS uses Ethernet as the transport, so I can just add more network connections to scale the bandwidth. Well, not exactly. Yes, it is possible to add Ethernet links and get more bandwidth. However, you’ll have to deal with a whole list of issues: link aggregation/802.3ad, physical switch redundancy (which is further complicated when you want to use link aggregation/802.3ad), multiple IP addresses on the NFS server(s), multiple VMkernel ports on the VMware ESX servers, and multiple IP subnets. Let’s just say that scaling NFS bandwidth with VMware ESX isn’t as straightforward as it may seem. This article I wrote back in July of 2008 may help shed some light on the particulars that are involved when it comes to ESX and NIC utilization. (The toy IP-hash sketch after this list shows why a single NFS mount tends to stay on a single link no matter how many uplinks you aggregate.)

  • Myth #3: Performance over NFS is better than Fibre Channel or iSCSI. Based on this technical report by NetApp—no doubt one of the biggest proponents of NFS for VMware storage—NFS performance trails Fibre Channel, although by less than 10%. So, performance is comparable in almost all cases, and the difference is small enough not to be noticeable. The numbers do not, however, indicate that NFS is better than Fibre Channel. You can read my views on this storage protocol comparison at my site. By the way, also check the comments; you’ll see that the results in the technical report were independently verified by VMware as well. Based on this information, someone could certainly say that NFS performance is perfectly reasonable, but one could not say that NFS performance is better than Fibre Channel.
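  To make the thin-versus-thick distinction in Myth #1 concrete, here is a minimal Python sketch (not ESX code; the path and sizes are made up for illustration) of how thin provisioning behaves in principle: a sparse file advertises its full provisioned size while the filesystem only allocates the blocks actually written. An inflated, thick copy, which is what a Storage VMotion or clone produced on ESX 3.x, allocates everything up front.

      import os

      # Illustration only, on a Linux/Unix filesystem: a thin provisioned VMDK
      # behaves much like a sparse file.
      path = "/tmp/thin_demo.img"          # made-up path

      with open(path, "wb") as f:
          f.seek(10 * 1024**3 - 1)         # "provision" 10 GB
          f.write(b"\0")                   # but write only a single byte

      st = os.stat(path)
      print("apparent size :", st.st_size)            # ~10 GB, what the guest sees
      print("allocated size:", st.st_blocks * 512)    # only a few KB actually on disk

      # A thick copy allocates the full 10 GB up front, which is why the
      # inflation described in Myth #1 throws the space savings away.
      os.remove(path)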

     
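  And to show why "just add more links" does not scale a single NFS datastore (Myth #2), here is a toy Python model, an assumption for illustration rather than the exact hashing algorithm ESX or any switch uses, of a source/destination IP-hash policy. Because the chosen uplink depends only on the address pair, one VMkernel port talking to one NFS server address always lands on the same physical link; spreading the load needs extra server IP aliases, extra VMkernel ports, and usually separate subnets.

      import ipaddress

      def pick_uplink(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
          """Toy IP-hash: pick one of n_uplinks based only on the address pair."""
          src = int(ipaddress.ip_address(src_ip))
          dst = int(ipaddress.ip_address(dst_ip))
          return (src ^ dst) % n_uplinks

      # One VMkernel IP talking to one NFS server IP: every NFS packet hashes
      # to the same uplink, no matter how many links are in the aggregate.
      print(pick_uplink("10.0.1.21", "10.0.1.10", 2))   # -> 1
      print(pick_uplink("10.0.1.21", "10.0.1.10", 2))   # -> 1 again

      # A second IP alias on the filer gives the hash a second target,
      # so traffic to that address can use the other link.
      print(pick_uplink("10.0.1.21", "10.0.1.11", 2))   # -> 0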

    In September 2007, Nick Triantos, an experienced storage engineer, wrote:

    • Close to 90% of the VI3 environments today are deployed over FC,
    • And of those, based on experience, I’d say that 90% are using VMFS, VMware’s clustered filesystem
    • The complexity starts to increase exponentially as the number of servers in a VMware datacenter multiplies.
    • How is my performance going to be with 8-10 VMs on a VMFS LUN and a single disk I/O queue?
    • What if I take the RDM route and later on I run out of LUNs?

    Based on that, Nick comes to the conclusion that NFS is not that bad after all. Let’s see why!

    • Provisioning is a breeze
    • You get the advantage of VMDK thin provisioning since it’s the default setting over NFS (only with vSphere 4)
    • You can grow or shrink the NFS volume on the fly and see the effect on the ESX server with a click of the datastore “Refresh” button.
    • You don’t have to deal with VMFS or RDMs, so there’s no dilemma here
    • No single disk I/O queue, so your performance is strictly dependent upon the size of the pipe and the disk array.
    • You don’t have to deal with FC switches, zones, HBAs, and identical LUN IDs across ESX servers
    • You can restore (at least with NetApp) multiple VMs, individual VMs, or files within VMs.
    • You can instantaneously clone (with NetApp FlexClone) a single VM or multiple VMs
    • You can also backup whole VMs, or files within VMs

    Also, Nick says that NFS is just faster than FC! Can you believe that?

    • ESX server I/O is small-block and extremely random, which means that bandwidth matters little; IOPS and response time matter a lot.
    • You are not dealing with VMFS and a single managed disk I/O queue.
    • You can have a single mount point across multiple IP addresses
    • You can use link aggregation, IEEE 802.3ad (e.g. a NetApp multimode VIF with IP aliases)

    And he finishes with:

    • If you consider that on average a VMFS volume is around 70-80% utilized (and that may be high) and the VMDK itself is around 70% utilized, you can easily conclude that your effective storage utilization is anywhere from 49-56%, excluding RAID overhead. At that point NFS starts to make a LOT of sense; a quick back-of-the-envelope check follows below.
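
    Nick’s arithmetic is easy to check. A rough back-of-the-envelope sketch in Python (using his estimated percentages, not measured data):

        # Effective utilization = how full the VMFS volume is x how full each VMDK is.
        vmdk_utilization = 0.70                    # Nick's estimate for a thick VMDK
        for vmfs_utilization in (0.70, 0.80):      # Nick's estimate for a VMFS volume
            effective = vmfs_utilization * vmdk_utilization
            print(f"{vmfs_utilization:.0%} x {vmdk_utilization:.0%} = {effective:.0%}")
        # Prints 49% and 56%: roughly half the raw (pre-RAID) capacity is paid for
        # but never used, which is the space thin provisioning over NFS claws back.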

     

    Please follow up and take the poll

     

     

    Sources: Scottlowe.org and Storagefoo.blogspot.com

