You Changed, VMware. You Used To Be Cool.

So by now we’ve all heard about VMware’s licensing debacle. The two biggest changes were that VMware would charge for RAM utilization (vRAM) in addition to charging per CPU, and that the free version of ESXi would go from a 256 GB limit to an 8 GB limit.

There was a bit of pushback from customers. A lot, actually. A new term has entered the data center lexicon: vTax, a take on the new vRAM term. VMware sales reps were inundated with angry customers. VMware’s own message boards were swarmed with angry posts. Dogs and cats living together, mass hysteria. And I think my stance on the issue has been, well, unambiguous.

It’s tough to imagine that VMware anticipated a response this bad. I’m sure they expected some grumbling, but we haven’t seen a product launch go this badly since New Coke.

VMware is in good company

VMware introduced the concept of vRAM, essentially making customers pay for something that had been free. The only bone they threw customers was the removal of the core limit on processors (previously 6 or 12 cores per CPU, depending on the license). However, it’s not more cores most customers need, it’s more RAM. Physical RAM, not a lack of cores, is usually the limiting factor for increasing VM density on a single host. Higher VM density means lower power, lower cooling, and fewer hosts.

VMware knew this, so they hit their customers where it counts. The only way to describe this, no matter how VMware tries to spin it, is a price increase.

VMware argued that the vast majority of customers would be unaffected by the new licensing scheme, that they would pay the same with vSphere 5 as they did with vSphere 4. While that might have been true, IT doesn’t think about right now; it thinks about the next hardware order, and that next hardware is dripping with RAM. Blade systems from Cisco and HP can cram 512 GB of RAM into a single blade. Monster stand-alone servers can take even more: Oracle’s Sun Fire X4800, for example, can hold up to 2 TB of RAM. And it’s not like servers are going to get less RAM over time. Customers saw their VMware licensing costs increasing with Moore’s Law. We’re supposed to pay less as Moore’s Law marches on, and VMware figured out a way to make us pay more.

So people were mad. VMware decided to backtrack a bit and announced adjusted vRAM allotments and rules. So what’s changed?

You still pay for vRAM allotments, but the allotments have been increased across the board. Even the free version of the hypervisor got bumped up to 32 GB from 8 GB.

Also, vRAM usage is now based on a yearly average, so spikes in vRAM utilization won’t increase costs as long as they’re relatively short-lived. And a single VM’s vRAM usage is now capped at 96 GB, so no matter how large a VM is, it will only count as 96 GB of vRAM used.
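To make the per-VM cap concrete, here’s a minimal sketch of the adjusted accounting rules as I understand them. The function and variable names are my own, not anything from VMware’s tooling:

```python
# Sketch of the adjusted vRAM accounting described above: each powered-on VM
# counts its configured RAM toward the vRAM pool, but a single VM is capped
# at 96 GB no matter how large it is. (Names are hypothetical, mine only.)

VRAM_CAP_PER_VM_GB = 96

def vram_counted(vm_ram_gb):
    """vRAM a single powered-on VM counts toward the pool."""
    return min(vm_ram_gb, VRAM_CAP_PER_VM_GB)

def pool_usage(vm_sizes_gb):
    """Total vRAM counted for a set of powered-on VMs."""
    return sum(vram_counted(size) for size in vm_sizes_gb)

# A 256 GB "monster VM" counts as only 96 GB of vRAM:
print(vram_counted(256))         # 96
print(pool_usage([4, 32, 256]))  # 4 + 32 + 96 = 132
```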

Even with the new vRAM allotments, it’s still a price increase

The adjustments to vRAM allotments have helped quell the revolt, and the discussion is starting to turn from vTax to the new features in vSphere 5. The 32 GB limit for the free version of ESXi also made a lot more sense. 8 GB was almost useless (my own ESXi host has 18 GB of RAM), and given what it costs to power and cool even a basement lab, not even worth powering up. 32 GB means a nice beefy lab system for the next 2 years or thereabouts.

What’s Still Wrong

While the updated vRAM licensing has alleviated the immediate crisis, there is some damage done, some of it permanent.

The updated vRAM allotments make more sense for today and give some room for growth, but they still have a limited shelf life. As servers get more and more RAM over the next several years, the vTax will automatically increase. VMware is still tying its licensing to a depreciating asset: RAM.

That was part of what got people so peeved about the vRAM model. Even if they ran the numbers and found they didn’t need additional licenses beyond their vSphere 4 entitlements right now, most organizations had an eye on their hardware refresh cycle, because servers get more and more RAM with each refresh.

VMware is going to have to keep upping vRAM allotments on a regular basis. It makes me uncomfortable to know that my licensing costs could grow right along with RAM as it becomes exponentially more plentiful over time. I expect VMware will increase the allotments, but we have no idea when (and honestly, even whether) they will.

You Used to be Cool, VMware

The relationship between VMware and its customer base has also been damaged. VMware had incredible goodwill from customers, a vendor relationship that was the envy of the IT industry. We had their back, and we felt they had ours. No longer. Customers and the VMware community will now look at VMware with a somewhat wary eye. Less wary after the vRAM adjustments, but wary still.

I have to imagine that Citrix and Microsoft have sent VMware’s executives flowers with messages like “thank you so much” and “I’m going to name my next child after you”. I’m hearing anecdotal evidence that interest in Hyper-V and Citrix Xen has skyrocketed, even with the vRAM adjustments. In the Cisco UCS classes I teach, virtually everyone has taken more than just a casual look at Citrix and Hyper-V.

Citrix and Microsoft have long struggled to break the stranglehold that VMware has on server virtualization (over 80% market share). They’ve tried several approaches, without success. It’s somewhat ironic that it’s a move by VMware itself that seems to be loosening the grip. And remember, part of that grip was the loyal user community.

Think about how significant that is. Before vSphere 5, there was no reason for most organizations to get a “wandering eye”. VMware had the most features and a huge install base, plus plenty of resources and expertise available, which made VMware the obvious choice for most organizations. The pricing was reasonable, so there wasn’t a real need to look elsewhere.

Certainly the landscape for both VMware and the other vendors has changed substantially. It will be interesting to see how it all plays out. My impression is that VMware will still make money, but they will lose market share, even if they stay #1 in terms of revenue (since they’re charging more, after all).

Free ESXi 5 (vSphere 5 Hypervisor) Now With 32 GB

Update: Check out my buyers/builder’s guide for an inexpensive ESXi host.

Yeah, we all know about the licensing debacle with VMware, and how it dominated the conversation around the vSphere 5 release. A side issue got a bit less attention, however. I’d written previously about the new licensing model screwing home/lab users of the free ESXi hypervisor by allocating a paltry 8 GB of vRAM. The free version is limited in functionality (which I’m fine with), but the RAM limit used to be 256 GB. When they announced vSphere 5, the free version was restricted to only 8 GB of vRAM. Dick move.

However, it seems the huge backlash has changed their minds, and they’ve increased the vRAM allocation for the free version of ESXi 5.0 to 32 GB of RAM.

RAM everything around me (get the DIMMs, y’all)

The FAQ still shows the free version with only 8 GB of vRAM, but the updated pricing PDF confirms that it’s actually a 32 GB vRAM allocation, as does the announcement on VMware’s blog. From the pricing PDF:

Users can remotely manage individual vSphere Hypervisor hosts using the vSphere Client. vSphere Hypervisor is entitled to 32GB of vRAM per server (regardless of the number of processors) and can be utilized on servers with up to 32GB of physical RAM.

I think right now 32 GB for the free version is reasonable, although I think it’s only going to be reasonable for the next 18-24 months.  After that, they’ll need to increase it, and increase it on a regular basis if they continue with vRAM licensing. RAM is a depreciating asset, after all.

This should go a long way toward repairing a damaged relationship with VMware advocates such as myself, and with the VMware user community in general, who relied on the free version of ESXi for home and lab setups. The relationship isn’t quite where it was, to be sure, but it’s improving.

Also, yes that’s me in the picture. That I just took. For this entry. Because that’s how I roll.

VMware? Surprised. Oracle? Not Surprised.

There’s a rumor that VMware is backing away a bit from their vRAM (vTax) licensing. Nothing confirmed yet, but reportedly they’re not getting rid of vRAM licensing entirely; they’re making some adjustments. They’re upping the vRAM allotments, and 96 GB is the maximum a single VM can count toward vRAM allocation: if you had a VM with 256 GB of RAM, only 96 GB of it would count toward vRAM usage. I’d still like to see vRAM go away entirely, but the new limits seem less catastrophic in terms of future growth.

On a related note, it seems Oracle is one-upping VMware in the “dick move” department. A blog post points out Oracle’s new Java 7 licensing: in virtualized environments, Java 7 is only officially supported on an Oracle hypervisor. Knowing Larry Ellison, it’s not all that surprising. He’s a lock-’em-in kinda guy. I avoid Oracle at all costs.

VMware’s dick move was a surprise. That didn’t seem like the VMware we’d been working with for years. Oracle on the other hand, you’re only surprised they didn’t do it sooner.


Yes, VMware’s License Change Is That Bad: It’s A Trap!

So VMware announced last week the new vSphere 5. There were lots of new features and improvements announced, but the conversation that has resulted isn’t quite what VMware had in mind. Virtually (get it? virtually?) all discussion has centered around the new licensing scheme.

And some of us feel that the change is, well… a trap.

For pete’s sake Tony, they’re going to take your VCP away for this

There have been many, many blog posts with people weighing in. Most of the feedback on blogs and message boards has been fairly negative. A few have been more positive, although the positive posts (retweeted rapid-fire by a somewhat desperate-sounding @vSphere5 Twitter account) have enthusiastic-sounding titles like 7 Reasons Why VMware vSphere 5 vRAM Licensing Is Not As Bad As It Looks At First Glance.

My take is that it’s actually much worse than it looks at first glance. To quote a certain raspy-voiced naval officer: “It’s a trap”.

What Changed

At the heart of the change is the new concept of vRAM. vRAM is an allotment of RAM to use for powered-on VMs. If you have ten 4 GB VMs, you need 40 GB of vRAM allotments to power them up.
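As a back-of-the-envelope sketch of how this plays out: under the new model you need enough per-CPU licenses to cover both your physical CPUs and the pooled vRAM of your powered-on VMs. The function name and the 48 GB-per-license figure below are my own illustrative assumptions (allotments vary by edition), not VMware’s tooling:

```python
import math

# Hypothetical sketch: each per-CPU license also grants a vRAM allotment,
# and you need enough licenses to cover whichever constraint is larger.
def licenses_needed(cpus, total_vram_gb, vram_per_license_gb):
    by_cpu = cpus                                          # one license per CPU, minimum
    by_vram = math.ceil(total_vram_gb / vram_per_license_gb)  # enough to cover the vRAM pool
    return max(by_cpu, by_vram)

# Ten 4 GB VMs need 40 GB of vRAM. With an assumed 48 GB-per-license
# allotment, a 2-CPU host is still covered by its two CPU licenses:
print(licenses_needed(cpus=2, total_vram_gb=40, vram_per_license_gb=48))   # 2

# But cram 200 GB of powered-on VMs onto that same 2-CPU host and the
# vRAM pool, not the CPU count, drives the license count:
print(licenses_needed(cpus=2, total_vram_gb=200, vram_per_license_gb=48))  # 5
```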

With the old licensing, you could use as much RAM as you could cram into a box. You only paid per CPU (with limits on the number of cores per CPU depending on the license).  As RAM prices dropped, it made it very attractive to cram lots and lots of VMs into fewer hosts, requiring less power and cooling. Servers with 128-512 GB of RAM from Cisco, HP, and Dell are fairly common now.

With the new licensing, you pay per CPU like before, and they’ve removed the core limits. However, you now get a limited vRAM allotment with each license. No more unlimited RAM. And that’s the part that’s a trap.

It’s About The Future

The @vSphere5 Twitter account is busy retweeting posts from users who checked their licensing and found that if they upgraded to vSphere 5, the current vRAM allotments would cover their physical RAM. So yeah, right now your licensing may work out and you may not need to purchase any extra licenses, especially if you’re running on hardware more than a year old.

But what about the hardware you were going to refresh too? Chances are, it’s a nice beefy blade dripping with RAM like jewels on an heiress. And that’s when you’ll need to buy more licenses to use all that RAM.

RAM Rules Every Thing Around Me (Get the DIMMS, Y’All)

We’ve gotten used to the idea that RAM was essentially free in terms of licensing. RAM had a physical cost, but that cost was easily offset by the savings from increased density.

The biggest issue with the vRAM allotments is that they’re too small today, and they’ll definitely be too small tomorrow. RAM is a depreciating asset. RAM prices continuously drop, and servers keep getting more of it. The question isn’t if the new licensing model will screw you, it’s when.

VMware wrote a blog post on why they made the new licensing model, and they included some graphs. Well, here’s a graph I came up with.

Tony, you have a gift for visual aids.

The vRAM allotments are pretty small today, especially if you recently did a hardware refresh. Chances are if you bought a brand-new 2-way server recently, it came loaded with RAM.

And RAM keeps getting less and less expensive. A year or so from now, 1 TB systems (probably with no more than two or four 10-core CPUs) won’t be that uncommon. And then VMware’s licensing will look really bad compared to vSphere 4.

Under the new licensing, a fully loaded 1 TB system with 4 CPUs would cost $76,890 USD to license. That’s 22 licenses for a 4-CPU system.
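The arithmetic behind that figure can be sketched out directly. I’m assuming the top-tier (Enterprise Plus) numbers of a 48 GB vRAM allotment per license at roughly $3,495 list per CPU license; the names below are mine, and the prices are assumptions, not quotes:

```python
import math

# Rough reconstruction of the $76,890 figure above (assumed Enterprise
# Plus numbers: ~$3,495 list per CPU license, 48 GB vRAM per license).
PRICE_PER_LICENSE_USD = 3495
VRAM_PER_LICENSE_GB = 48

total_ram_gb = 1024  # a fully loaded 1 TB, 4-CPU box
cpus = 4

# Enough licenses to cover the CPUs AND the full 1 TB of vRAM:
licenses = max(cpus, math.ceil(total_ram_gb / VRAM_PER_LICENSE_GB))
print(licenses)                           # 22
print(licenses * PRICE_PER_LICENSE_USD)   # 76890
```

Note that 21 of the 22 licenses exist purely to cover RAM; only 4 are needed for the CPUs themselves.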

“$76,890 a server? Our IT budgets can’t repel license increases of that magnitude!”

VMware’s Other Dick Move

I’ve already written about how VMware has dramatically changed the licensing for the free version of the vSphere 5 hypervisor (ESXi 5.0). The limit used to be 256 GB of RAM, now it’s 8 GB.  Was 256 GB too much? Sure. But 8 GB is way too low.

To those of us that have home labs, that means we’ll be looking elsewhere. I think this is going to have a dramatic impact on the number of virtual appliances released for VMware. The focus will then move to XenServer (since most virtual appliances are based on Linux).

I’ve Got A Bad Feeling About This

Let’s face it, this new licensing scheme (and the 8 GB limit on the free version of ESXi 5) are dick moves. Up until now, VMware has been very friendly to its customer base. Users have supported them, championed them, and advocated on their behalf. I’ve never felt the need to even look at another hypervisor, because VMware rocked it.

Now, I look at VMware the way I look at vendors like Oracle and EMC. IT managers and techies alike know there are some vendors where the relationship is symbiotic; we sit on the same side of the table. Networking vendors like Juniper, F5, and even Cisco (with a few exceptions) take a cordial, symbiotic view of the customer/vendor relationship. Some vendors *cough Oracle cough Check Point cough* take a more adversarial and parasitic view.

I’m not saying VMware is quite at the parasitic level right now, but they’ve suddenly taken a step in that direction. And that means the honeymoon is over. There is a general feeling that the loyal customer base is being taken for granted, as if somehow we owe VMware for the awesomeness they provide us. Well, it’s the IT managers and the nerdarati that helped make VMware the company it is today. And I can’t help but feel they’ve turned on us. One might even say, turned to the dark side.

Wandering Eye

Look VMware, I’ve got to be honest. I’ve been… I’ve been seeing other people. I downloaded XenServer the other day. It asked me to download it, and I was being polite, but considering you might limit us to 8 GB, I went ahead and installed it. And I’m liking what I see.

Don’t Uninstall Angry

A wise sysadmin once said: “Never uninstall angry”. So I’m not making any moves yet. I’m not quite ready to break up with VMware. It’s still a dynamic ecosystem, and I hope it’s not too damaged by all this. It’s been almost a week since the announcements, so I’ve been able to get a little perspective. I’m still disappointed, to be sure, but not quite as aggro.

It’s clear VMware felt it was getting the short end of the stick with the trend toward higher server density. And perhaps VMware’s licensing needed to change to reflect that. I’m not altogether against vRAM pools, but I do think they need to be dramatically increased. Today they’re too small, and a year or two from now they’ll be laughable.

Yo Dawg, I Heard You Like Hypervisors…

Yo dawg, I heard you like hypervisors, so I put a hypervisor in your hypervisor.

I’ve been testing out XenServer, and it’s actually quite nifty. Unfortunately, I can’t run regular (hardware-virtualized) VMs while XenServer is itself running inside ESXi, since ESXi doesn’t pass the VT extensions through to the guest, so only paravirtualized OSes will run.

Free ESXi! Now With 8 GB Limit!

Update 9/8/11: Check out my guide for building/buying an inexpensive ESXi host

Update 8/4/11: Woohoo! They’ve upped it to 32 GB of vRAM allocation (and a max of 32 GB of physical RAM, too).

Update 7/18/11: Yes, it’s true. Here it is in writing (last FAQ entry).

Wow, this keeps getting worse and worse.

Many techies such as myself have been running ESXi 4.1 in our home labs. The free license runs full-blown ESXi with some limits, such as no vMotion, no HA, no vCenter integration, etc. It’s fine for a lab environment, and perfect for a home lab where I need to test a lot of systems.

I was about to publish an article called “So You Want To Build an ESXi System?” on how to build a cheap home system. (Hint: you could build a cheap home system for about $1,000 that had 24 GB of RAM and 4 cores.)

But now there are reports that VMware has made the free version of the vSphere 5 Hypervisor (essentially ESXi 5.0 renamed) absolutely useless by limiting the total amount of RAM to 8 GB per host. I’ll say that again: you can only use a total of 8 GB of RAM for all your VMs on a host, no matter how much RAM it has.

8 GB? Seriously?

The licensing change for the paid version was bad enough. We used to be able to add RAM to our licensed ESX/ESXi hosts for free to increase utilization. The same went for the free version of ESXi. Below is a screenshot of my current ESXi license: up to 6 cores on one CPU (I have a quad-core Intel Core i7) and up to 256 GB of RAM (my host has 18 GB).

My current ESXi 4.1 free license: 256 GB limit and no more than 6 cores per CPU

Sure, 256 GB of RAM is a bit much. I’d accept a much lower limit; I think even 32 GB would be reasonable (though only for a few more years). I’d even accept the current core limit (or maybe bump it up to no more than 8?). But 8 GB? That’s a useless home lab.

Look, VMware, I know you’re in it to make a buck. I don’t fault you for that. I like money too. I too have fantasies of swimming through a vault of gold like Scrooge McDuck.

Our shared goal

If this is true, it’s a huge blow to the VMware community. And given that the vRAM allocation for the lowest paid license is 24 GB, an 8 GB limit for the free version unfortunately sounds plausible.

I’ve been a passionate advocate of VMware: I’m VCP4 certified, and I teach courses that involve virtualization. For the most part, the only VM vendor out of my mouth has been VMware. You’ve treated the community well in the past, and we the nerd class and IT managers have rewarded you handsomely with server virtualization market dominance. A fantastic ecosystem has grown up around you, and you can make a good living in the VMware world. The licensing moves put all that at risk. All that built-up goodwill? It’s fading fast.

“But It’s Got Electrolytes, It’s What Plants Crave…”

“Fibre Channel has what VMware craves.”

In the movie Idiocracy, among other hilarious hijinks, humanity is stuck parroting things that sound authoritative without really understanding anything beyond the superficial statement.

Sound familiar? Like maybe tech?



Take, for example, the claim that VMware datastores are best served by Fibre Channel SAN arrays. I’ve caught myself parroting this before. In fact, here is the often-parroted pecking order for performance:

  • Fibre Channel/FCoE (Best)
  • iSCSI (Wannabe)
  • NFS (Newb)

NFS has long been considered the slowest of the three options for accessing data stores in VMware, with some VMware administrators deriding it mercilessly. iSCSI has also been considered behind Fibre Channel in terms of performance.

But how true is that, actually? What proof are those assertions based on? Likely it dates from when Fibre Channel was rocking 4 Gbit while most Ethernet networks were a single gigabit. But with the advent of 10 Gbit Ethernet/FCoE and 8 Gbit FC (and even 16 Gbit FC), how true is any of this anymore?

It seems, though, that the conventional wisdom may be wrong. NetApp and VMware got together and ran tests to see what the performance difference was among the various ways of accessing a datastore (FC, FCoE, iSCSI, NFS).

This mirrors earlier performance tests by VMware comparing 1 Gbit NFS and iSCSI to 4 Gbit FC. The 4 Gbit FC was faster, but more interesting was that iSCSI and NFS were very close to each other in performance. Here’s part of the conclusion from VMware’s 10 Gbit smackdown (FC, FCoE, iSCSI, NFS):

All four storage protocols for shared storage on ESX are shown to be capable of achieving throughput levels that are only limited by the capabilities of the storage array and the connection between it and the ESX server…

Another assumption is that jumbo frames (Ethernet frames above 1,500 bytes, typically around 9,000 bytes) improve iSCSI performance. But here’s a performance test that challenges that assumption: jumbo frames didn’t seem to matter much.

In fact, it shows that if you’re arguing about which transport to use, it’s typically the wrong argument to have. The choice of storage array is far more important.

Another surprise in the recent batch of tests is that iSCSI hardware initiators didn’t seem to add much benefit, and neither did jumbo frames. Both were conventional wisdom (that I myself have parroted before) when it came to iSCSI performance.

I remember the same phenomenon with Sun’s Solaris years ago, when it was nicknamed “Slowlaris”. When Solaris was released in the ’90s, it replaced the BSD-based SunOS. Like a lot of newer operating systems, it required beefier hardware, and so the new Solaris tended to run slower than SunOS on the same machine, hence the derisive name. That gap closed in the late ’90s, but the name (and the perception) stuck around well into the aughts, despite numerous benchmarks showing Solaris going toe-to-toe with Linux.

In technology, we’re always going to get stuff wrong. Things change too often for anyone to be 100% accurate at all times, but we should be open to being wrong when new data is presented, and we should occasionally test our assumptions.