Yo Dawg, I Heard You Like Hypervisors…

Yo dawg, I heard you like hypervisors, so I put a hypervisor in your hypervisor.

I’ve been testing out XenServer, and it’s actually quite nifty. Unfortunately, I can’t run regular (hardware-virtualized) VMs while XenServer itself is running inside ESXi, since ESXi doesn’t expose the VT-x extensions to its guests, so only paravirtualized OSes will run.
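
(Side note: newer ESXi builds do have an unsupported knob for this. On ESXi 5.0, for instance, exposing hardware virtualization to a guest comes down to roughly the two settings below; it’s strictly a lab hack, and the exact option changes between versions.)

Host side, added to /etc/vmware/config on the ESXi 5.0 box:

    vhv.allow = "TRUE"

Guest side, the nested hypervisor’s .vmx needs virtual hardware version 8:

    virtualHW.version = "8"

ESXi 5.1 and later swap vhv.allow for a per-VM vhv.enable = "TRUE" setting.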

Free ESXi! Now With 8 GB Limit!

Update 9/8/11: Check out my guide for building/buying an inexpensive ESXi host

Update 8/4/11: Woohoo!  They’ve upped it to 32 GB of vRAM allocation (and a 32 GB physical RAM maximum too).

Update 7/18/11: Yes, it’s true. Here it is in writing (last FAQ entry).

Wow, this keeps getting worse and worse.

Many techies like me have been running ESXi 4.1 in our home labs. The free license runs full-blown ESXi with some limits: no vMotion, no HA, no vCenter integration, and so on. That’s fine for a lab environment, and perfect for a home lab where I need to test a lot of systems.

I was about to publish an article called “So You Want To Build an ESXi System?” on how to build a cheap home system. (Hint: for about $1,000 you could build one with 24 GB of RAM and 4 cores.)

But now there are reports that VMware has made the free version of vSphere 5 Hypervisor (essentially ESXi 5.0 renamed) absolutely useless by limiting the total amount of RAM to 8 GB per host. I’ll say that again: you can only use a total of 8 GB of RAM for all your VMs on a host, no matter how much RAM it has.
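
To put that cap in perspective: a host with 24 GB of physical RAM could power on two 4 GB VMs, or four 2 GB VMs, and that’s it; the other 16 GB of installed memory would just sit there.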

8 GB? Seriously?

The licensing change for the paid version was bad enough. We used to be able to add RAM to our licensed ESX/ESXi hosts for free to increase utilization. The same went for the free version of ESXi. Below is a screenshot of my current ESXi license. I get up to 6 cores on one CPU (I have a quad-core Intel Core i7) and up to 256 GB of RAM (my host has 18 GB).

My current ESXi 4.1 free license: 256 GB limit and no more than 6 cores per CPU

Sure, 256 GB of RAM is a bit much. I’d accept a far lower limit; I think even 32 GB would be reasonable (although it wouldn’t stay reasonable for more than a few years). I’d even accept the current core limit (or maybe bump it up to no more than 8?). But 8 GB? That’s a useless home lab.

Look, VMware, I know you’re in it to make a buck. I don’t fault you for that. I like money too. I too have fantasies of swimming through a vault of gold like Scrooge McDuck.

Our mutual goal

If this is true, it’s a huge blow to the VMware community, and given that the vRAM allocation for the lowest paid license is 24 GB, the reported 8 GB limit for the free version seems plausible.

I’ve been a passionate advocate of VMware: I’m VCP4 certified, and I teach courses that involve virtualization. For the most part, VMware has been the only virtualization vendor out of my mouth. You’ve treated the community well in the past, and we, the nerd class and the IT managers, have rewarded you handsomely with dominance of the server virtualization market. A fantastic ecosystem has cropped up, and you can make a good living within the VMware world. These licensing moves put all of that at risk. All that built-up goodwill? It’s fading fast.

“But It’s Got Electrolytes, It’s What Plants Crave…”

“Fibre Channel has what VMware craves.”

In the movie Idiocracy, among other hilarious hijinks, humanity is stuck parroting statements that sound authoritative without understanding anything beyond the superficial sound bite.

Sound familiar? Like maybe tech?

Take, for example, the claim that VMware datastores perform best on Fibre Channel SAN arrays. Now, I’ve caught myself parroting this before. In fact, here is the often-parroted pecking order for performance:

  • Fibre Channel/FCoE (Best)
  • iSCSI (Wannabe)
  • NFS (Newb)

NFS has long been considered the slowest of the three options for accessing data stores in VMware, with some VMware administrators deriding it mercilessly. iSCSI has also been considered behind Fibre Channel in terms of performance.

But how true is that, actually? What proof are those assertions based on? Likely they date from the days when Fibre Channel was rocking 4 Gbit while most Ethernet networks were a single gigabit. But with the advent of 10 Gbit Ethernet/FCoE and 8 Gbit FC (and even 16 Gbit FC), how true is any of this anymore?

It seems, though, that the conventional wisdom may be wrong. NetApp and VMware got together and ran some tests to see what the performance difference really was among the various ways of accessing a datastore (FC, FCoE, iSCSI, NFS).

This mirrors some earlier performance tests by VMware comparing 1 Gbit NFS and iSCSI to 4 Gbit FC. The 4 Gbit FC was faster, but the more interesting result was that iSCSI and NFS were very close to each other in performance. Here’s part of the conclusion from the VMware 10 Gbit smackdown (FC, FCoE, iSCSI, NFS):

All four storage protocols for shared storage on ESX are shown to be capable of achieving throughput levels that are only limited by the capabilities of the storage array and the connection between it and the ESX server…

Another assumption is that jumbo frames (Ethernet frames carrying more than the standard 1,500-byte payload, typically around 9,000 bytes) improve iSCSI performance. But here’s a performance test that challenges that assumption: in those results, jumbo frames didn’t seem to matter much.
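
If you want to test that one yourself, here’s roughly what enabling jumbo frames looks like on an ESXi 4.x host from the command line. The vSwitch name, port group, and addresses below are placeholders for illustration, and the physical switches and storage array have to be configured for a 9,000-byte MTU end to end as well:

    # Bump the vSwitch MTU to 9000
    esxcfg-vswitch -m 9000 vSwitch1

    # Create a VMkernel port for storage traffic with a 9000-byte MTU on the "Storage" port group
    esxcfg-vmknic -a -i 192.168.10.20 -n 255.255.255.0 -m 9000 Storage

    # Sanity check: ping the array with a large, don't-fragment packet
    vmkping -d -s 8972 192.168.10.1

If that last vmkping comes back clean, jumbo frames are working along the whole path.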

In fact, it shows that if you’re arguing about which transport to use, you’re typically having the wrong argument. The choice of storage array is far more important.

Another surprise in the recent batch of tests is that iSCSI hardware initiators didn’t seem to add a whole lot of benefit, and neither did jumbo frames. Both had been conventional wisdom (that I myself have parroted before) when it comes to iSCSI performance.

I remember the same phenomenon with Sun’s Solaris years ago, when it was nicknamed “Slowlaris”. Initially, when Solaris was released in the ’90s, it was replacing the BSD-based SunOS. Like a lot of newer operating systems, it required beefier hardware, so the new Solaris tended to run slower than SunOS on the same machines, hence the derisive name. That ended in the late ’90s, but the name (and the perception) stuck around through the ’00s, despite numerous benchmarks showing Solaris going toe-to-toe with Linux.

In technology, we’re always going to get stuff wrong. Things change too often to be 100% accurate at all times, but we should be open to being wrong when new data is presented, and we should also occasionally test our assumptions.