Inexpensive VMware ESXi (vSphere Hypervisor) Host

So you want an ESXi (vSphere Hypervisor) server, but you don’t want to spend several grand on a blade chassis or enterprise-grade server. Perhaps you want an inexpensive server for home use, something quieter than the jet engines that cool the big stuff. So what do you do?

Like a lot of people, you’ll hit up a message board or two. And invariably, someone will link you to the site’s whitebox HCL. It’s a good list if you have a pile of hardware and you want to see what works, but if you’re looking for which system to buy or what components to obtain to build your own system, it’s mostly useless.

One route I’m particularly fond of is desktop hardware. It’s the least expensive way to get a virtualization host, and desktops are a heck of a lot quieter than most servers, making them appropriate for places that aren’t a data center.

So I’ve written this guide on how to build/spec/buy your own ESXi system.

First off, ESXi is available for free. With ESXi 4.x, the free license limited you to 6 cores and 256 GB of RAM. With ESXi 5, there’s no core limit, although you’re limited to 32 GB of RAM (not a problem for a home lab). The free license lets you run as many VMs as you can stuff in there, although you can’t use the fancier features like vCenter integration, vMotion, and so on. But for a home lab or basic virtualization environment, that’s no big deal.

The issue with ESXi is that it’s somewhat picky about the hardware it will run on (although it’s improved with ESXi 4 and 5). Most server-grade hardware will run fine, but if you’re looking to use something a little more pedestrian, some of the out-of-the-box components may not work, or may not have the features you need.

System Itself

Whether you’re shopping for a motherboard or a pre-built system, I’ve yet to find a fairly recent mid to high-end system that doesn’t load ESXi. Usually the only thing I’ve had to add is a server-grade NIC (recommendations in the NIC section).

RAM Rules Everything Around Me

For virtualization, RAM is paramount. Get as much RAM as you can. When researching a system, it’s a good idea to make sure it has at least 4 memory DIMM slots. Some of the i7 boards have 6, which is even better. Most 4-slot motherboards these days can take up to 16 GB of RAM; 6-slot boards can do up to 24 GB. Given RAM prices these days, it doesn’t make sense not to fill them to the brim. I can get 16 GB of good desktop memory for less than $100 USD now.

Haters gonna hate

Also, make sure to get good RAM. Don’t necessarily go for fast RAM, and it’s definitely not a good idea to overclock RAM in a virtualized environment. Remember, this is a virtualization host, not a gaming rig. I tried off-brand RAM once, and all I got was a face full of purple screens of death.


Processors

For a long time, it seems that processors got all the glory. With virtualization and most virtualization workloads, processors are secondary: cores are good, speed is OK, and RAM is usually the most important aspect. That said, there are a couple of things about processors you want to keep in mind. On the Intel side, newer processors such as the i5 and i7 make good virtualization processors.


The more cores the better. These days, you can easily afford a quad-core system. I wouldn’t worry too much about the core speed itself, especially on a virtualization system. I would recommend putting more emphasis on the number of cores.

Then there’s hyper-threading. A processor with hyper-threading support will make 4 cores look like 8 to the operating system; each core has two separate execution contexts. VMware makes pretty good use of hyper-threading by being able to put a vCPU (what the virtual machine sees as a core) on each context, so get it if you can.

Proof that I’m, like, technical and stuff.

But again, for most VM workloads, don’t go for a monster processor if it means you can only afford 4 GB of RAM. RAM first, then processor.

Here’s where it gets a bit tricky. There are a number of new features that processors may or may not have that will affect your ability to have a functioning ESXi host.

Virtualization Support

Virtually (get it?) every processor has some sort of virtualization support these days. For Intel, this is typically called VT-x; for AMD processors, it’s AMD-V. You’ll want to double-check, but any processor made in the past five years likely supports these technologies. While there are ways to do virtualization (paravirtualization) on processors without these features, it’s not nearly as full-featured, and most operating systems won’t run.
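If you have a machine handy that can boot a Linux live CD, a quick sanity check is to look for the CPU feature flags; a minimal sketch, assuming a Linux shell:

```shell
# Look for hardware virtualization flags in /proc/cpuinfo:
# 'vmx' = Intel VT-x, 'svm' = AMD-V.
if grep -q -E 'vmx|svm' /proc/cpuinfo; then
    echo "VT-x/AMD-V flag present"
else
    # No flag: either the CPU lacks the feature or it has
    # been disabled in the BIOS.
    echo "No VT-x/AMD-V flag found"
fi
```

Note that the flag can be present in the silicon but switched off in the BIOS, so check your BIOS settings too.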

Direct Connection

Some hypervisors allow a virtual machine to control a peripheral directly. VMware calls this DirectPath I/O, and other vendors have other names for it. If you had a SATA drive in your virtualization host, you could present it directly to a particular VM.  You can also do it for USB devices, network interfaces, crypto accelerators, etc.

Keep in mind that if you do this, no other VM will be able to access that particular device. Doing DirectPath I/O (or similar) also usually prevents vMotion from occurring.

It’s a somewhat popular feature if you’re building yourself a NAS server inside a virtual system, but otherwise it’s not that popular in the enterprise.

Intel calls this technology VT-d, and AMD calls it AMD-Vi (usually just labeled IOMMU). Not all processors have these features, so be sure to check. You may also need to enable it in your system’s BIOS (sometimes it’s not on by default). For instance, my new Dell XPS 8300 has an i5-2300 processor, which does not support VT-d, although the i5-2400 does.

For most setups it’s not a big deal, but if you’ve got a choice, get the processor with the VT-d/IOMMU support.
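There’s no single CPU flag for VT-d, but from a Linux live environment you can look for the relevant kernel messages; a rough sketch (the exact log lines vary by kernel version):

```shell
# DMAR entries in the kernel log indicate Intel VT-d; AMD-Vi or
# IOMMU entries indicate the AMD equivalent. No output usually
# means the feature is absent or disabled in the BIOS.
dmesg 2>/dev/null | grep -i -e dmar -e 'amd-vi' -e iommu \
    || echo "No DMAR/IOMMU messages found"
```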


AES-NI

This is a lesser-known feature. AES-NI is an Intel-only feature right now (although AMD is expected to support something like it in the upcoming Bulldozer processor family).

AES-NI first appeared in laptop and server processors, but is now making its way into all of Intel’s chips. Essentially, it’s an extra set of processor instructions specifically geared towards AES encryption. If you have software written to take advantage of these instructions, you can significantly speed up encryption operations. Mac OS X Lion, for instance, uses AES-NI for FileVault 2, as does BitLocker in Windows 7. Best of all, it’s passed down to the guest VMs so they can take advantage of it as well.
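To see whether a given chip has it, look for the `aes` CPU flag; a quick sketch from a Linux shell (the commented-out openssl line benchmarks the accelerated code path if you want to compare numbers):

```shell
# The 'aes' flag in /proc/cpuinfo indicates AES-NI support.
if grep -qw aes /proc/cpuinfo; then
    echo "AES-NI supported"
else
    echo "No AES-NI"
fi
# To benchmark the accelerated path (takes a while to run):
# openssl speed -evp aes-128-cbc
```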


Networking

VMware has been notoriously picky about its networking hardware. The built-in Ethernet ports on most desktop-class motherboards won’t cut it. What you’ll likely need to do is buy a server-grade network adapter, and with ESXi 4.0 and on, it will have to be Gigabit or better (there’s no more Fast Ethernet support). ESXi won’t even install unless it detects an appropriate NIC.

This has changed somewhat with ESXi 5.0. The ESXi 5.0 installer comes pre-loaded with more Ethernet drivers than its predecessors and accepts more mundane NICs. On my Dell XPS 8300, the built-in Broadcom BCM57788 NIC was recognized by the default ESXi 5.0 installer. It was not with 4.1.

If your NIC is not recognized, my favorite go-to NIC for VMware systems is the Intel Pro/1000. I can usually pick up a single- or dual-port NIC off eBay for less than $50. Make sure to get the right interface (most modern systems use PCIe), and make sure it’s the Pro line, not the desktop NICs. I have a couple of Intel Pro/1000 PTs for my ESXi boxen that I got for $40 apiece, and they work great.

We’ll have to see how many more drivers ESXi 5.0 supports, but chances are you’ll need to pick up an Intel Pro/1000 NIC if you’re using 4.x with desktop hardware.
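Before buying anything, it’s worth checking which NIC chipset your box already has; a sketch from a Linux live environment (lspci comes from the pciutils package on most distributions):

```shell
# List PCI Ethernet controllers so you can check the chipset
# against what your ESXi version's installer recognizes.
if command -v lspci >/dev/null 2>&1; then
    lspci -nn | grep -i ethernet || echo "No Ethernet controller found"
else
    echo "lspci not installed (pciutils package)"
fi
```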


Storage

Standard SATA drives work fine in ESXi 4 and 5. For a desktop system, it usually doesn’t make much sense to put in SAS drives, but it’s your money.

If you’re going to use desktop or basic server hardware, there’s something to keep in mind that may surprise you: the RAID you think your motherboard has isn’t hardware RAID; it’s software RAID. Even if you set it up in the BIOS, it’s still software RAID, and VMware doesn’t support it. If you want hardware RAID, you’ll need to buy a separate RAID controller (SATA or SAS). Typically these controllers run a couple hundred dollars.

But let’s say you’re going to host a NAS device inside your virtualized environment with something like FreeNAS or OpenFiler. Here’s an option: put three SATA drives into your host; each drive then becomes a datastore. Create a virtual machine and give it three virtual disks, each one contained on a different datastore. You can then create a RAID 5 array from the guest operating system. This is all software RAID, but performance should be fine.
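Inside that guest, the three virtual disks can be tied together with Linux md software RAID; a sketch, assuming the disks show up as /dev/sdb through /dev/sdd (check with lsblk first; the device names here are assumptions). The leading echo makes this a dry run:

```shell
# Three virtual disks, each backed by a different datastore.
# Dry run: remove the leading 'echo' to actually create the
# array (which destroys any data on the listed devices).
DISKS="/dev/sdb /dev/sdc /dev/sdd"
echo mdadm --create /dev/md0 --level=5 --raid-devices=3 $DISKS
echo mkfs.ext4 /dev/md0
```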

SSDs are dropping in price. Unfortunately, none of the hypervisor vendors I know of support the TRIM command (important for keeping SSD performance good), so I wouldn’t recommend using an SSD as a datastore just yet.

You can also use a NAS device (such as a Drobo or Synology) as either an iSCSI or NFS server and store your virtual machines there. That would get you RAID protection as well as some flexibility and the ability to run a cluster.
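Mounting an NFS export as a datastore can be done from the ESXi command line; a sketch, where the hostname, export path, and datastore name are all placeholders, and the esxcli commands themselves only work from the ESXi Shell (or over SSH):

```shell
# Add an NFS export from a NAS as an ESXi datastore, then list
# the NFS mounts. The guard just prints a reminder when run on
# a machine without esxcli.
if command -v esxcli >/dev/null 2>&1; then
    esxcli storage nfs add --host nas.example.com \
        --share /volume1/vmstore --volume-name nas-datastore
    esxcli storage nfs list
else
    echo "esxcli not found: run these from the ESXi Shell"
fi
```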

Pre-built or Build Your Own

The allure of building your own system is strong, especially for the nerd core set. But you have to be a bit careful. The last time I tried to build my own ESXi host, this was the result.

Things went horribly, horribly wrong

The picture above is from the actual build I did. There is a samurai sword, a bottle of absinthe, and three server carcasses. I won’t say how each was used, as you can see things did not go as planned. I eventually got the system up and running, but I wasted about two days of effort. If I’d just bought a pre-built system and added some RAM, I could have saved a lot of time.

Also, I really didn’t save any money by building it myself, even if you don’t account for the wasted time. I recently purchased a Dell XPS 8300 with 6 GB of RAM, a 1.5 TB hard drive, and a quad-core i5-2300 processor as a virtualization system. It cost me about $650 USD at Best Buy, and another $90 will get it to 16 GB of RAM. Building a similar system on Newegg would probably cost a bit more.

But your time is yours, and perhaps your server builds don’t include ancient weapons and high-proof spirits. If so, more power to you.

Some may scoff at the hardware choices here, and that’s fine. I wouldn’t run a Fortune 500 company off the components discussed here, but that’s not what this equipment is for. This is for the smaller side of the SMB, home labs, and dev/test labs. And for that, it works beautifully.

39 Responses to Inexpensive VMware ESXi (vSphere Hypervisor) Host

  1. Pingback: Free ESXi 5 (vSphere 5 Hypervisor) Now With 32 GB | The Data Center Overlords

  2. Pingback: Free ESXi! Now With 8 GB Limit! | The Data Center Overlords

  3. Nice writeup.. I recently installed (was just giving it a second whirl) with ESXi 5 on an XPS 8100 that I got from the dell outlet for about $600. ESXi 4 would not install on this, but as you mentioned, 5 apparently has more drivers for the cards.. and even though the wireless card is not recognized (not that it matters) it installed. I’ll probably pick up a couple more of those 8300s or similar and build a little LAB cluster.

  4. Adrian L says:

    Great blog – Exactly the information I was looking for to build a mini-itx ESXi host, using something like this as the base: J&W MINIX H67M-USB3 motherboard. Apart from the limited maximum 2 simm memory limit (16GB) I can still get a i5 or i7 quad core processor, has 2 gigabit ethernet ports, which could be useful, and a x16 express slot for a better NIC if necessary. Should make a nice little host for home use.

    • RajNath says:

      Hi there,

      This is Raj from India. I first tried to buy an Intel i5 processor and that was upsetting me with the cost. They said it would cost around 40000 Rupees (740 USD) or better still i7 processor at nearly 50000 Rupees (925 USD) But I felt I would really spend a Bomb on it and switched over to AMD instead and Hey Presto I had to pay only 35000 Rupees (648 USD) to buy AMD FX with 8 core processor with 16GB RAM with 2.89 Gigs processing power and an optional over clocking adjustment for CPU (I could not have asked for more). I wonder why people still follow the foot steps of Intel. Its already 2 months that I bought it and it works like a charm. Way to go for AMD !! 🙂 🙂

  5. Dossa Blondell says:

    Excellent post. I’m so glad to see someone (who seems to know their stuff!) confirm the XPS8300 will successfully run ESXi. I’ll be sure to load ESXi5, not 4. I have the same sword – never used it for a server build before. But you’re making me think about new methods for hard drive disposal…
    Question: Does a Win7 client on ESXi run 1920×1080 video? I chose the Radeon HD6670 option from Dell,

  6. Brian says:

    Can you run Windows Server 2008R2 on this machine? Either through ESXi as VM’s or as the OS?

  7. Alex says:

    I built 3 ESXI boxes for my company running full set up of our erp test systems using IBM/LENOVO TD230 Think Server. It suuports 2x xeon and 32 GB Ram. Price is very resonable come with
    LSI hardware raid card, supporting $400+ SAS or $80 sata, can also be mixed. There is not much different in performance different from the live systems

  8. Pingback: Best affordable Intel CPU to buy for your personal virtualization project, Core i7 2600, not the 2600K! | Tinkertry

  9. Don Wuerfel says:

    Thanks for the great info. I have two questions I could use help getting answered.

    1) I’m familiar with VirtualBox which requires a host OS and then supports one or more guest OSs. Does this free VMWare ESXi require a host OS?

    2) Can hardware like a PCI video capture card (for zoneminder) be dedicated to one of the virtual machines?

    Thanks for any help!

    • Nick Kirby says:

      I had a similar question:

      Can you output the sound & video of a virtual machine to the host machine or do you HAVE to use a client?

      The possibilities of ESXi has got me very excited, I built a machine a few months ago – now considering using it for ESXi and building another one!!

      😀 😀 😀

      • GB says:

        You can only access the VM’s via a client however with Vt-d you can pass though a graphics card and USB ports to a VM and use them as the primary dislpay and inputs. Sound works the same way. This is not supported and has its limitations (lots of video acceleration does not work, only ATI’s drivers will start in the virtual environment) but works for me, I can game and run mediaportal on the ESXi machine as if its a physical machine.

    • GB says:

      1) ESXi is the OS.

      2) As long as your proccessor AND motherboard support Vt-d (vt-x is not enough) then you can pass hardware through to the client VM’s. I have a TV tunner passed through to XP as my TV server and a Digium card passed through to a Trixbox VM. Some motherobards bridge togather all the PCI slots though so all hardware has to go to the same VM, others treat each port individually.

  10. dresner says:

    You can use SSDs with a sand force controller because TRIM is handled by the SSD and not by the OS. For example, thats how anyone can put a 3rd party SSD inside their mac and still get TRIM

  11. La Bisuteríauan says:

    Have you tested teh performance with a NAS device?
    I was almost sure to purchase a Synology or build a system with Free-Nas until I googled “esxi nas low performance”

    • tonybourke says:

      Almost all measurement of performance with a NAS device is done with throughput: copying a very large file to or from the NAS device. A regular 7200 RPM hard drive can do 90 MBps by itself, give or take depending on the drive; some NAS devices might choke that down to 50 MBps. That measurement of performance is (usually) not appropriate for ESXi. The biggest issue is IOPS. Spindle drives are going to be very limited on IOPS, so if you try to run multiple VMs that do a lot of IOPS, local or NAS spindle storage isn’t going to perform well. If you need lots of IOPS, use an SSD.

      • GB says:

        If the drives were connected to a pass through controller I am sure the results would be different.

        You can also use Raw Device Mapping to use a drive on a sata port nativly in a VM, no virtual file system is used at all and fully bootable. This may lead to better results but its not officially supported so you have to set it up through SSH or CLI.

  12. Thank you, GB, but I am asking because I want to use VMotion with a NAS

  13. Juan says:

    Interesting the SSD comment / vs RAM.

    I have been researching and it looks SSD must be only used to swap. I have not clear if swap2ssd is a feature of the vmware free hypervisor 5, as I will go through free hypervisor first.

    My first build could be:
    shuttle SH67H7
    intel i7 3770 (because hyperthreading)
    not sure if 16 (4×4) or 32 (4×8)Gb RAM (very different prices at the time of writting)
    2 intel pro 1000 nics
    Barracuda 2Tb HD 7200

    and not sure if intel ssd 520 120Gb, only if supported by vmware free hypervisor 5.

    I have budget for the whole system, even with 32Gb RAM
    Thank you.

  14. is this a duplicated comment? Sorry

    Interesting the SSD comment / vs RAM.

    I have been researching and it looks SSD must be only used to swap. I have not clear if swap2ssd is a feature of the vmware free hypervisor 5, as I will go through free hypervisor first.

    My first build could be:
    shuttle SH67H7
    intel i7 3770 (because hyperthreading)
    not sure if 16 (4×4) or 32 (4×8)Gb RAM (very different prices at the time of writting)
    2 intel pro 1000 nics
    Barracuda 2Tb HD 7200

    and not sure if intel ssd 520 120Gb, only if supported by vmware free hypervisor 5.

    I have budget for the whole system, even with 32Gb RAM
    Thank you.

  15. Shawn says:

    Got a spare Dell Studio XPS (435mt) laying around with 8GB of RAM and a i7 proc that I’m trying to get ESXi5 installed on it. It seems to hang at “Relocating modules and starting up the kernel”. I’ve disabled the onboard audio and the 1394 controller but no joy. This has an add in video card but I’m not sure where else to look.

    • tonybourke says:

      Hrm, maybe it’s a NIC issue. Are you using the built-in NIC? Do you know what the NIC chipset is?

      • Shawn says:

        Just got ESXi 4.1 installed without any issues. This system has an embedded NIC on it. REALTEK.

      • Duncan says:

        Hi There,

        I’ve got a 435 MT as well and I also could not install vsphere 5 on it. I got stuck at the same message. It hangs at relocating modules and starting the kernel”. Were you able to resolve this or figure out what device is causing the error? I did get 4.1 to work as well but would like to upgrade.

  16. Steve says:

    A Physical Windows server and StarWind Free iSCSI software is a great way to test vMotion, DRS and HA on the cheap as the Windows server can run on just about anything.

  17. Pingback: Confluence: Math Department Computer Systems

  18. wilmark says:

    Leave the building for true nerds who live on newegg. Your builds look like stuff most of us would throw away, besides you can set some serious cuts with that cheap shit. Is that a Dell? Dells are to buy and use once… And is that a Canon DLSR????

  19. Karl says:

    Nice write up. Wish I had read this before buying my set up.
    It’s def worth noting that the motherboard needs to support VT-D as well as the processor having that function for Direct I/O. I have the 2500T i5 but a Gigabyte board with no support. Its not the end of the world though. The Gigabyte board I bought came with no onboard GFX which I stupidly did not even realise until I took it out the box. Durrrr. Instead the board has 2 Slots for X-Fire Support so the idea was to add 2 decent GFX cards one day to utilize Remote FX or VMware View should the whole world go completely virtual (Something to mess about with anyway)! Luckily, for the time being I have an old Nvidia Dual VGA card which does the trick for the moment.

    Another handy component is the Startech Trayless 4 Drive Hot swap Enclosure. Worth a look.

    I will certainly be purchasing 4 sticks of 8GB RAM now they are available and will upgrade to an i7 one day.

  20. Karl says:

    Maybe some of you will find this of interest.

  21. Josh says:

    Nice post , I have a question . I need to virtualize 5 servers at my home for my lab setup. Do u think a machine with Quad core Intel i5 (VT-x or not) with 12GB of RAM should be enough for it ? This is not a production setup so i’m not bothered about the performance much its just that it should feel comfortable to use.

  22. Dave says:

    you can buy a used power edge 2950 that has 2x Xeon Quad Core’s for about 400 bucks and go up to 16gb of much better quality ram for about 100 bucks more. that will give you a much better starting point and much more room for growth down the line.

  23. Dude says:

    Do they sell a server with a samurai sword and a bottle of hard liquor, maybe a pci-e expansion?

  24. Juan says:

    I have a similar system, as yours, sh67h7.
    Do you have experience upgrading to vSphere 5.5 ?

  25. David says:

    Incredibly useful write up! Thanks for doing it. Confirming:
    Installed ESXi 6.0 R2 on Dell XPS 8300 i7 12GB of RAM. Machine was $300 on eBay including shipping. Picked up an LSI Megaraid 9240 for $68. ESXi loaded without a hiccup, recognizing all hardware. Moved three Windows 2012 servers from incredibly noisy Dell PowerEdge 1950 to the XPS and they run like a charm.

  26. Jean Dupont says:

    Hi To you all

    i’m new to virtualisation. I’m going a budget homeLab for practice purposes.

    I’ll go with a shutlle dh110 with 16 GB of Ram and an i5 CPU.

    Where i’m lost is on storage. For now i’ll use local storage. I hope later to be able to buy a second shutlle and NAS to test clustering and things like that but i’m staying simple for now.

    But I don’t know what I should do:

    M.2 SSD + 2,5 inch HDD
    Only M.2 SSD
    M.2 SSD + 2,5 inch SSD

    I’m really lost on this and would appreciate any help.

    Thanks a lot
