“But It’s Got Electrolytes, It’s What Plants Crave…”

“Fibre Channel has what VMware craves.”

In the movie Idiocracy, among other hilarious hijinks, humanity is stuck parroting phrases that sound authoritative, without understanding anything beyond the superficial statement.

Sound familiar? Like maybe tech?

Take, for example, the claim that VMware datastores perform best on Fibre Channel SAN arrays. Now, I’ve caught myself parroting this one before. In fact, here is the often-parroted pecking order for performance:

  • Fibre Channel/FCoE (Best)
  • iSCSI (Wannabe)
  • NFS (Newb)

NFS has long been considered the slowest of the three options for accessing datastores in VMware, with some VMware administrators deriding it mercilessly. iSCSI has also been considered a step behind Fibre Channel in performance.

But how true is that, actually? What proof are those assertions based on? Likely they date from when Fibre Channel was rocking 4 Gbit while most Ethernet networks were a single gigabit. But with the advent of 10 Gbit Ethernet/FCoE and 8 Gbit FC (and even 16 Gbit FC), how true is any of this anymore?
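To see why the old pecking order ever made sense (and why it’s on shakier ground now), here’s a rough back-of-the-envelope comparison of nominal link bandwidth. These are the commonly quoted per-direction figures and ignore protocol overhead entirely, so treat them as ballpark numbers rather than benchmarks:

```python
# Nominal per-direction bandwidth of the links mentioned above, using the
# commonly quoted figures (protocol overhead is ignored entirely).
nominal_mb_per_s = {
    "1 Gbit Ethernet":  125,    # 1 Gbit/s / 8
    "4 Gbit FC":        400,    # standard quoted 4GFC throughput
    "8 Gbit FC":        800,    # standard quoted 8GFC throughput
    "10 Gbit Ethernet": 1250,   # 10 Gbit/s / 8
    "16 Gbit FC":       1600,   # standard quoted 16GFC throughput
}

for link, mb in sorted(nominal_mb_per_s.items(), key=lambda kv: kv[1]):
    print(f"{link:<18} ~{mb} MB/s")
```

Against 1 Gbit Ethernet, 4 Gbit FC really was in another league; against 10 Gbit Ethernet, 8 Gbit FC isn’t.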

It seems, though, that the conventional wisdom may be wrong. NetApp and VMware got together and ran some tests to see what the performance difference actually was for the various ways of accessing a datastore (FC, FCoE, iSCSI, NFS).

This mirrors some earlier performance tests by VMware comparing 1 Gbit NFS and iSCSI to 4 Gbit FC. 4 Gbit FC was faster, but more interesting was that iSCSI and NFS were very close to each other in performance. Here’s part of the conclusion from VMware’s 10 Gbit smackdown (FC, FCoE, iSCSI, NFS):

All four storage protocols for shared storage on ESX are shown to be capable of achieving throughput levels that are only limited by the capabilities of the storage array and the connection between it and the ESX server…

Another assumption is that jumbo frames (Ethernet frames carrying more than 1,500 bytes of payload, typically around 9,000 bytes) improve iSCSI performance. But here’s a performance test that challenges that assumption too: jumbo frames didn’t seem to matter much.

In fact, it shows that if you’re arguing about which transport to use, you’re typically having the wrong argument. The choice of storage array is far more important.

Another surprise in the recent batch of tests is that iSCSI hardware initiators didn’t seem to add much benefit, and neither did jumbo frames; both had been conventional wisdom (that I myself have parroted before) for iSCSI performance.
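The jumbo frame result is less surprising once you do the math on what they can theoretically buy you. Here’s a rough sketch, assuming plain Ethernet plus IPv4/TCP with no options or VLAN tags; even in the best case the gain is only a few percent of wire efficiency:

```python
# Best-case wire efficiency for a TCP payload at standard vs. jumbo MTU.
# Assumes plain Ethernet + IPv4 + TCP headers, no options or VLAN tags.
ETH_OVERHEAD = 7 + 1 + 14 + 4 + 12   # preamble, SFD, header, FCS, inter-frame gap
IP_HEADER = 20
TCP_HEADER = 20

def efficiency(mtu):
    payload = mtu - IP_HEADER - TCP_HEADER   # usable bytes per frame
    wire = mtu + ETH_OVERHEAD                # bytes actually on the wire
    return payload / wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{efficiency(mtu):.1%} of line rate available to payload")
# MTU 1500: ~94.9%, MTU 9000: ~99.1% -- a few percent at most,
# which lines up with jumbo frames not mattering much in the tests.
```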

I remember the same phenomenon with Sun’s Solaris years ago, when it was nicknamed “Slowlaris”. When Solaris was released in the 90’s, it replaced the BSD-based SunOS. Like a lot of newer operating systems, it required beefier hardware, and as such the new Solaris tended to run slower than SunOS on the same hardware, hence the derisive name. That gap closed in the late 90’s, but the name (and the perception) stuck well into the naughts (00’s), despite numerous benchmarks showing Solaris going toe-to-toe with Linux.

In technology, we’re always going to get stuff wrong. Things change too often to be 100% accurate at all times, but we should be open to being wrong when new data is presented, and we should also occasionally test our assumptions.
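Testing an assumption doesn’t have to be a big production, either. Here’s a minimal sketch of a crude sequential-write smoke test you could point at your own datastores; the mount paths below are hypothetical placeholders, and a real comparison should use a purpose-built tool like Iometer or fio with larger and more varied workloads:

```python
# Crude sequential-write smoke test across datastore mount points.
# The paths are hypothetical placeholders; treat the numbers only as a
# sanity check of your assumptions, not a proper benchmark.
import os
import time

DATASTORES = {
    "nfs_datastore":   "/vmfs/volumes/nfs_datastore",    # hypothetical
    "iscsi_datastore": "/vmfs/volumes/iscsi_datastore",  # hypothetical
}
BLOCK = b"x" * (1 << 20)   # 1 MiB per write
TOTAL_MB = 512

for name, path in DATASTORES.items():
    target = os.path.join(path, "throughput_test.tmp")
    start = time.time()
    with open(target, "wb") as f:
        for _ in range(TOTAL_MB):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())   # force the data out of the page cache
    elapsed = time.time() - start
    os.remove(target)
    print(f"{name}: ~{TOTAL_MB / elapsed:.0f} MB/s sequential write")
```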
