Inexpensive VMware ESXi (vSphere Hypervisor) Host
September 8, 2011
So you want an ESXi (vSphere Hypervisor) server, but you don’t want to spend several grand on a blade chassis or enterprise-grade server. Perhaps you want an inexpensive server for home use, and something that’s going to be quieter than the jet-engines that cool the big stuff. So what do you do?
Like a lot of people, you’ll hit up a message board or two. And invariably, someone will link you to the vm-help.com site’s whitebox HCL. It’s a good list if you have a pile of hardware and you want to see what works, but if you’re looking for which system to buy or what components to obtain to build your own system, it’s mostly useless.
One route I’m particularly fond of is desktop hardware. It’s the least expensive way to get a virtualization host, and desktop systems are a heck of a lot quieter than most servers, which makes them appropriate for places that aren’t a data center.
So I’ve written this guide on how to build/spec/buy your own ESXi system.
First off, ESXi is available for free. With ESXi 4.x, you were limited to 6 cores and 256 GB of RAM. With ESXi 5, there’s no core limit, although you’re limited to 32 GB of RAM (that shouldn’t be a problem). The free license lets you run as many VMs as you can stuff in there, although you can’t use any of the fancy features like vCenter integration, vMotion, and other fun stuff. But for a home lab or basic virtualization environment, that’s no big deal.
The issue with ESXi is that it’s somewhat picky about the hardware it will run on (although it’s improved with ESXi 4 and 5). Most server-grade hardware will run fine, but if you’re looking to use something a little more pedestrian, some of the out-of-the-box components may not work, or may not have the features you need.
System Itself
Whether you’re shopping for a motherboard or a pre-built system, I’ve yet to find a fairly recent mid to high-end system that doesn’t load ESXi. Usually the only thing I’ve had to add is a server-grade NIC (recommendations in the NIC section).
RAM Rules Everything Around Me
For virtualization, RAM is paramount. Get as much RAM as you can. When researching a system, it’s a good idea to make sure it has at least 4 memory DIMM slots. Some of the i7 boards have 6, which is even better. Most 4-DIMM-slot motherboards these days can take up to 16 GB of RAM, while 6-slot boards can do up to 24 GB. Given RAM prices these days, it doesn’t make sense not to fill them to the brim with RAM. I can get 16 GB of good desktop memory for less than $100 USD now.
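As a rough sanity check on sizing, here’s a quick sketch of how many VMs a given amount of RAM will comfortably hold. The per-VM sizes and the 2 GB of hypervisor overhead are ballpark assumptions I picked for illustration, not VMware sizing numbers:

    # Rough estimate of how many VMs fit in a host, assuming RAM is the
    # limiting factor. The overhead and per-VM figures are my own ballpark
    # assumptions, not official VMware numbers.

    def vms_that_fit(host_ram_gb, ram_per_vm_gb, hypervisor_overhead_gb=2):
        """How many VMs of a given size fit in the RAM left after the hypervisor."""
        usable = host_ram_gb - hypervisor_overhead_gb
        return max(0, int(usable // ram_per_vm_gb))

    for host_ram in (8, 16, 24):
        print("%2d GB host: %2d x 2 GB VMs, or %2d x 4 GB VMs" % (
            host_ram, vms_that_fit(host_ram, 2), vms_that_fit(host_ram, 4)))

Even with that conservative math, the jump from 8 GB to 16 GB roughly doubles what you can run, which is why RAM comes first.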
Haters gonna hate
Also, make sure to get good RAM. Don’t necessarily go for fast RAM, and it’s definitely not a good idea to overclock RAM in a virtualized environment. Remember, this is a virtualization host, not a gaming rig. I once tried using off-brand RAM, and all I got was a face full of purple screens of death.
Processors
For a long time, processors got all the glory. With virtualization and most virtualization workloads, processors are secondary. Cores are good, speed is OK. RAM is usually the most important aspect. That said, there are a couple of things with processors you want to keep in mind. On the Intel side, newer processors such as the i5 or i7 make good virtualization processors.
Cores
The more cores the better. These days, you can easily afford a quad-core system. I wouldn’t worry too much about the core speed itself, especially on a virtualization system. I would recommend putting more emphasis on the number of cores.
Then there’s hyper-threading. A processor with hyper-threading support makes 4 physical cores look like 8 to the operating system: each core has two separate execution contexts. VMware makes pretty good use of hyper-threading by being able to put a vCPU (what the virtual machine sees as a core) on each context, so get it if you can.
Proof that I’m, like, technical and stuff.
But again, for most VM workloads don’t go for a monster processor if it means you can only afford 4 GBs of RAM. RAM first, then processor.
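If you want to confirm whether hyper-threading is actually active on hardware you already have, one quick check is to boot a Linux live CD and compare physical cores against logical threads in /proc/cpuinfo. A minimal sketch, assuming a single-socket desktop system:

    # Compare physical cores vs. logical threads from /proc/cpuinfo.
    # Assumes a single-socket desktop system and a Linux environment,
    # e.g. a live CD booted on the candidate hardware.

    def cpu_topology():
        cores, threads = None, None
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("cpu cores"):
                    cores = int(line.split(":")[1])
                elif line.startswith("siblings"):
                    threads = int(line.split(":")[1])
        return cores, threads

    cores, threads = cpu_topology()
    print("Physical cores: %s, logical threads: %s" % (cores, threads))
    if cores and threads:
        print("Hyper-threading looks %s" % ("active" if threads > cores else "inactive"))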
Here’s where it gets a bit tricky. There are a number of new features that processors may or may not have that will affect your ability to have a functioning ESXi host.
Virtualization Support
Virtually (get it?) every processor has some sort of virtualization support these days. For Intel, this is typically called VT-x. For AMD processors, it’s called AMD-V. You’ll want to double-check, but any processor made in the past five years likely supports these technologies. While there are ways to do virtualization (paravirtualization) on processors without these features, it’s not nearly as full-featured, and most operating systems won’t run unmodified.
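Before you buy (or before you wipe a machine you already own), you can boot a Linux live CD and look at the CPU flags: vmx means Intel VT-x, svm means AMD-V. A minimal sketch along those lines:

    # Look for the hardware virtualization CPU flags in /proc/cpuinfo:
    #   vmx = Intel VT-x, svm = AMD-V.
    # Run from a Linux live CD on the candidate hardware.

    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":")[1].split())

    if "vmx" in flags:
        print("Intel VT-x supported")
    elif "svm" in flags:
        print("AMD-V supported")
    else:
        print("No hardware virtualization flags found")

If neither flag shows up, check the BIOS first; some boards ship with virtualization support disabled by default.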
Direct Connection
Some hypervisors allow a virtual machine to control a peripheral directly. VMware calls this DirectPath I/O, and other vendors have other names for it. If you had a SATA drive in your virtualization host, you could present it directly to a particular VM. You can also do it for USB devices, network interfaces, crypto accelerators, etc.
Keep in mind that if you do this, no other VM will be able to access that particular device. Doing DirectPath I/O (or similar) usually prevents vMotion from occurring.
It’s a somewhat popular feature if you’re building yourself a NAS server inside a virtual system, but otherwise it’s not that popular in the enterprise.
Intel calls this technology VT-d, and AMD calls it IOMMU. Not all processors have these features, so be sure to check it out. You may also need to enable this in your system’s BIOS (sometimes it’s not on by default). For instance, my new Dell XPS 8300 has an i5-2300 processor. It does not support VT-d, although the i5-2400 processor does.
For most setups it’s not a big deal, but if you’ve got a choice, get the processor with the VT-d/IOMMU support.
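Checking for VT-d/IOMMU is less clear-cut, since it depends on the chipset and BIOS as well as the CPU. One rough heuristic from a Linux live CD is to look for DMAR (Intel) or AMD-Vi (AMD) lines in the kernel log; the exact messages vary by kernel version, so treat this as a hint and confirm against the processor’s spec sheet:

    # Rough heuristic only: look for IOMMU-related lines in the kernel log.
    # "DMAR" tends to show up on Intel VT-d systems and "AMD-Vi" on AMD
    # systems, but the strings vary by kernel version and BIOS settings,
    # so confirm against the processor's spec sheet as well.
    import subprocess

    log = subprocess.check_output(["dmesg"]).decode("utf-8", "replace")
    hits = [l for l in log.splitlines() if "DMAR" in l or "AMD-Vi" in l]

    if hits:
        print("Possible VT-d/IOMMU support:")
        for l in hits[:5]:
            print("  " + l)
    else:
        print("No IOMMU messages found (unsupported, or disabled in the BIOS)")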
AES-NI
AES-NI is a lesser-known feature, and right now it’s Intel-only (although AMD is supposed to support something like it in the upcoming Bulldozer processor family).
AES-NI first appeared in laptop and server processors, but is now making its way into all of Intel’s chips. Essentially, it’s an extra set of processor instructions specifically geared towards AES encryption. If you have software written to take advantage of these instructions, you can significantly speed up encryption operations. Mac OS X Lion, for instance, uses AES-NI for FileVault 2, as does BitLocker in Windows 7. Best of all, it’s passed down to the guest VMs so they can take advantage of it as well.
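You can check for it the same way as the virtualization flags; on Linux the CPU flag is simply aes:

    # AES-NI shows up as the "aes" flag in /proc/cpuinfo on Linux.
    with open("/proc/cpuinfo") as f:
        has_aesni = any(
            line.startswith("flags") and "aes" in line.split(":")[1].split()
            for line in f
        )
    print("AES-NI supported" if has_aesni else "No AES-NI support detected")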
Networking
VMware has been notoriously picky about networking hardware. The built-in Ethernet ports on most desktop-class motherboards won’t cut it. What you’ll likely need to do is buy a server-grade network adapter, and with ESXi 4.0 and later, it will have to be Gigabit or better (there’s no more Fast Ethernet support). ESXi won’t even install unless it detects a supported NIC.
This has changed somewhat with ESXi 5.0. The ESXi 5.0 installer comes pre-loaded with more Ethernet drivers than its predecessors and accepts more mundane NICs. On my Dell XPS 8300, the built-in Broadcom BCM57788 NIC was recognized by the default ESXi 5.0 installer; it was not with 4.1.
If your NIC is not recognized, my favorite go-to NIC for VMware systems is an Intel Pro/1000. I can usually pick up a single or dual port card off eBay for less than $50. Make sure to get the right kind (most modern systems use PCI-e), and make sure it’s the Pro NICs, not the desktop NICs. I have a couple of Intel Pro/1000 PTs for my ESXi boxen that I got for $40 apiece, and they work great.
We’ll have to see how many more drivers ESXi 5.0 supports, but chances are you’ll need to pick up an Intel Pro/1000 NIC if you’re using 4.x with desktop hardware.
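If you want to know exactly which Ethernet controller a machine has before committing to an install, lspci from a Linux live CD is the quickest route. A small sketch that just filters the output so you can compare against the whitebox HCL:

    # List Ethernet controllers so they can be checked against the whitebox
    # HCL before committing to an install. Assumes a Linux live CD with
    # lspci (pciutils) available.
    import subprocess

    out = subprocess.check_output(["lspci"]).decode("utf-8", "replace")
    for line in out.splitlines():
        if "Ethernet controller" in line:
            print(line)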
Storage
Standard SATA drives work fine in ESXi 4 and 5. For a desktop system, it usually doesn’t make much sense to put in a SAS drive, but it’s your money.
If you’re going to use desktop or basic server hardware, there’s something to keep in mind, something that may surprise you: The RAID you think your motherboard has isn’t hardware RAID — it’s software RAID. Even if you set it up in the BIOS, it’s still software RAID. VMware doesn’t support it. If you want hardware RAID, you’ll need to buy a separate RAID controller (SATA or SAS). Typically these RAID controllers are a couple hundred dollars.
But let’s say you’re going to host a NAS device inside your virtualized environment, with something like FreeNAS or OpenFiler. Here’s an option: put three SATA drives into your host. Each drive then becomes a data store. Create a virtual machine and give it three virtual disks, each one contained on a different data store. You can then create a RAID 5 array from inside the operating system. This would all be software RAID, but performance should be fine.
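If you go this route, it’s worth being clear on what the capacity works out to. A quick sketch of the usual RAID 5 math (usable space is the total minus one drive, and the array survives any single drive failure):

    # Quick RAID 5 math for the three-datastore setup described above:
    # usable capacity is (n - 1) drives, and the array survives one failure.

    def raid5_usable_tb(drive_count, drive_size_tb):
        if drive_count < 3:
            raise ValueError("RAID 5 needs at least three drives")
        return (drive_count - 1) * drive_size_tb

    drives, size_tb = 3, 1.0  # e.g. three 1 TB SATA drives
    print("Raw: %.1f TB, usable: %.1f TB, tolerates one drive failure" % (
        drives * size_tb, raid5_usable_tb(drives, size_tb)))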
SSD drives are dropping in price. Unfortunately, none of the hypervisor vendors I know of support the TRIM feature (important for keeping an SSD’s performance up over time). I wouldn’t recommend using an SSD as a datastore just yet.
You can also use a NAS device (such as a Drobo or Synology) as either an iSCSI or NFS server and store your virtual machines there. That would get you RAID protection as well as some flexibility and the ability to run a cluster.
Pre-built or Build Your Own
The allure of building your own system is strong, especially for the nerd core set. But you have to be a bit careful. The last time I tried to build my own ESXi host, this was the result.
Things went horribly, horribly wrong
The picture above is from the actual build I did. There is a samurai sword, a bottle of absinthe, and three server carcasses. I won’t say how each was used, but as you can see, things did not go as planned. I eventually got the system up and running, but I wasted about two days of effort. If I’d just bought a pre-built system and added some RAM, I could have saved a lot of time.
Also, I really didn’t save any money by building it myself, even if you don’t account for the wasted time. I recently purchased a Dell XPS 8300 with 6 GB of RAM, a 1.5 TB hard drive, and a 4-core i5-2300 processor as a virtualization system. It cost me about $650 USD at Best Buy. Another $90 will get that system to 16 GB of RAM. Building a similar system on Newegg would probably cost a bit more.
But your time is yours, and perhaps your server builds don’t include ancient weapons and high-proof spirits. If so, more power to you.
Some may scoff at the hardware choices here, and that’s fine. I wouldn’t run a Fortune 500 company off the components discussed here, but that’s not what this equipment is for. This is for the smaller side of the SMB, home labs, and dev/test labs. And for that, it works beautifully.