
What’s Best EXT3 or EXT4 For My NFS Datastores?


Here we go again with a blog post in a similar vein to my other post, Chunk Size Of a RAID0 Volume On a QNAP NAS – What’s The Sweet Spot?, where I was looking for the best chunk size for my software RAID layer.

This time I concentrate on the layer above the software RAID, that is the File System layer, with a simple question in mind: what is the best File System for my NFS datastores in terms of IOPS?

With my QNAP TS-459 Pro I do not have many choices. Only two File System formats are available: the honorable EXT3 and the current EXT4. I invite you to click on both links to read a full description of these two extended File System formats. Wikipedia says EXT3 is slower than EXT4 (or JFS, ReiserFS and XFS). So anybody would say forget about the old and clunky EXT3, and let’s format with EXT4. It’s new, it’s sexy, it’s fast, faster than EXT version 3…

Hey wait a minute, am I going to take that for granted and follow the ‘best practices’ dictated by someone else? As Steve Chambers would say, “Why? And in relation to what?” Should EXT4 be faster than EXT3 anytime, in any environment, with any IO pattern, on any storage device?

Nothing but benchmark tests can tell you this, and that’s exactly what I wanted to check out in my VMware home lab which, for your information, consists of a couple of physical servers, a Shuttle SX58J3 and an HP ProLiant ML115 G5, both attached to a QNAP TS-459 Pro storage device through an HP ProCurve 1810G-24 switch. If you wish to read more about how I set up the trunks to connect the gear together, read How To Set Up a Trunk Port Between An ESXi4.1 And An HP ProCurve 1810g-24.

First a few words about my requirement. It is very simple: I am looking for a File System, either EXT3 or EXT4, that can deliver the highest level of IOPS for my typical IO length, which is 4KB.

The typical IO length is the typical size, in KB, of the IOs issued to the storage by the virtual machines. Put another way, it is the size in KB of the majority of the IOs your virtual machines are issuing to the storage.

To get the IO trend I use VMware’s vscsiStats. As the VMware documentation says, this tool collects and reports counters on storage activity. Its data is collected at the virtual SCSI device level in the kernel, which means that results are reported per VMDK (or RDM) irrespective of the underlying storage protocol. The following data are reported in histogram form:

  • IO size
  • Seek distance
  • Outstanding IOs
  • Latency (in microseconds)

After collecting the data, I analyzed it and it appears that my typical IO length is 4KB. My workload is actually a bunch of Microsoft Windows Server 2003 and 2008 machines with a couple of workstations, XP and W7. If I had collected data from a bunch of Microsoft SQL Server 2008 servers, most probably my typical IO length would have been between 8KB and 64KB.
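Finding the typical IO length boils down to picking the tallest bucket in the vscsiStats IO-length histogram. Here is a minimal sketch: the histogram data below is made up for illustration (on ESXi you would collect the real numbers with the vscsiStats commands shown in the comments), and the awk one-liner simply picks the modal bucket.

```shell
# Hypothetical sample of a vscsiStats ioLength histogram as "bucket_bytes,count".
# On an ESXi host the real data would come from something like:
#   vscsiStats -l                    # list VMs and their worldGroupIDs
#   vscsiStats -s -w <worldGroupID>  # start collecting for one VM
#   vscsiStats -p ioLength -c        # print the IO-length histogram as CSV
cat > /tmp/iolength.csv <<'EOF'
512,120
1024,340
2048,510
4096,9800
8192,1200
16384,300
EOF

# Pick the bucket with the highest count: that is the "typical IO length".
awk -F, '$2 > max { max = $2; size = $1 } END { print size " bytes" }' /tmp/iolength.csv
```

With the sample data above, the modal bucket is 4096 bytes, i.e. a 4KB typical IO length.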

Now that I have my typical IO length, it’s time to do some load tests using my IOmeter Configuration File, from which I have selected a few typical Access Specifications to run against different File System formats and different NFS server configuration settings (oplocks).

Basically I have tested five Access Specifications:

  1. 512B; 100% Read; 0% Random (Max Read IOPS)
  2. 512B; 100% Write; 0% Random (Max Write IOPS)
  3. 256KB; 100% Read; 100% Sequential (Backup)
  4. 256KB; 100% Write; 100% Sequential (Restore)
  5. 4KB; 50% Read; 50% Write; 100% Random

This list of Access Specifications needs some explanation. The first two Access Specifications help me identify the max IOPS I can get out of the storage device. Access Specifications #3 and #4 identify the maximum throughput in MBytes/s I can get out of the storage, and finally the last Access Specification represents my typical IO pattern in my VMware home lab environment.
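The IOPS and throughput views of specs #3 and #4 are two sides of the same number: sequential throughput in MB/s is just IOPS times the IO size. A quick sanity check (the IOPS figure below is made up for the example, not a measured result):

```shell
# Convert an IOPS figure at a given IO size into throughput in MB/s.
# 400 IOPS is a hypothetical IOmeter result for spec #3 (256KB sequential read).
iops=400
io_kb=256
awk -v iops="$iops" -v kb="$io_kb" 'BEGIN { printf "%.1f MB/s\n", iops * kb / 1024 }'
```

So 400 IOPS at 256KB would mean 100 MB/s of sequential throughput, close to what a single gigabit link can carry.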

A few words about the graphic below. I have five Access Specifications, each one with a different color. Then I have two File Systems tested, and for each of them two NFS server configuration settings, that is the NFS server’s OPLOCKS setting turned either on or off.

OPLOCKS stands for Opportunistic Locking, a File System mechanism to lock files when they are opened by a process or a user. VMware manages its own file-locking mechanism, so it is most of the time best to disable that feature on the NFS server. That is a ‘best practice’ in my VMware home lab that my tests confirmed: the IO performance gain from turning that feature off is awesome!
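On a QNAP the OPLOCKS switch is a checkbox in the NAS admin UI. Purely as an illustration of what “turning OPLOCKS off” amounts to at the config-file level, here is a sketch flipping an `oplocks` directive in a Samba-style configuration file; the file path and its contents are assumptions made up for this example, not taken from a real QNAP.

```shell
# Create a sample Samba-style config with oplocks enabled (illustrative only).
cfg=/tmp/sample-smb.conf
cat > "$cfg" <<'EOF'
[global]
    oplocks = yes
    kernel oplocks = yes
EOF

# Flip both oplocks directives from "yes" to "no" (portable, no sed -i).
sed 's/oplocks = yes/oplocks = no/' "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"
grep oplocks "$cfg"
```

After the edit both directives read `= no`; on a real NAS you would restart the file service (or just use the admin UI) for the change to take effect.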

What the graphic tells us:

  • Results are always best when OPLOCKS is turned off on the NFS server,
  • EXT3 is as fast as EXT4 for an IO length of 512 Bytes in read mode, but EXT3 is 5% slower in write mode,
  • With a large IO length, that is 256KBytes, EXT3 and EXT4 are close to each other,
  • And for a 4KB IO length, EXT3 beats EXT4 in every case!

Following the results of my tests I decided to format my NFS datastores with EXT3. But do not take this for granted and blindly format your NFS datastores with EXT3; that is not what I wanted to demonstrate here. Any time you have to design your storage, I have one simple best practice for you: validate your storage design with tests!
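If you want to rerun a test like Access Specification #5 yourself from a Linux guest, fio can express the same pattern; fio is a substitute for the IOmeter setup used in this post, not the tool I ran, and the target directory below is an assumption.

```ini
; Hypothetical fio job approximating Access Specification #5:
; 4KB blocks, 50% read / 50% write, 100% random.
[typical-4k-pattern]
; assumed mount point of the NFS datastore inside the guest
directory=/mnt/nfs-datastore
rw=randrw
rwmixread=50
bs=4k
size=1g
runtime=60
time_based
ioengine=libaio
iodepth=8
```

Run it with `fio typical-4k-pattern.fio` and compare the reported IOPS across your own File System and oplocks combinations.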


  1. Jason
    May 8, 2011 at 19:47 | #1

    Good post, but all I have to say is Google knows best! If a multi-billion dollar company can trust all their data to ext4, that’s what I’m sticking with! But hey, to each his own, as they say… However I still have 2 TB in NTFS because I used to use Windows… haven’t gotten around to switching them over since I converted to Linux…

    - Jason
    6TB NFS (Linux), FTP, LAMP, DLNA (minidlna) to WDTV-Live

    • deinoscloud
      May 8, 2011 at 22:37 | #2

      Hi Jason and thx for your comment,

      Indeed. Don’t take what I say for granted… Pick your file system based on your business and functional requirements. In my case I had one single requirement: max IOPS for a typical 4KB IO pattern, period.

      In another post I mention that I use RAID0 for my VMware datastores. Obviously data protection is not a requirement, but max IOPS is…

      Again, pick what you need based on your requirements :)

      Cheers,

  2. jason
    May 9, 2011 at 18:51 | #3

    Have you done any NFS on Linux? I use it regularly, and on occasion some of my folders do not show up on the share inside the mounted drive… I can only see a few of the folders/files… I end up having to remount the drive in order to see all the files… can’t seem to figure out what’s causing this… it happens on all drive types (EXT4 and NTFS) and it happens randomly; some days it’s okay… other days it happens a few times a day… -jason

  3. Oz
    November 25, 2011 at 05:09 | #4

    For something more drastic, set your NFS exports to sync and retest!

  4. Ian
    February 6, 2012 at 22:03 | #5

    I have just bought a QNAP TS-412, and during setup it asked me EXT3 or EXT4. I have NO IDEA what to use; this NAS is for storing HD video content from 5GB to 50GB, and speed is the name of the game here. I decided on EXT4 because 4 is higher than 3 lol, still no idea!!!

  5. VJ
    December 16, 2012 at 11:57 | #6

    Hi Friends,
    Has anyone implemented a stretched cluster with an NFS datastore?

    Thanks

