Building a fully supported vSphere 5.5U3 host on Intel Atom for $650

Why am I writing this?  Why would anyone run vSphere on Intel Atom?

There are a number of use cases that spring to mind (DaaS, ROBO, dev/test, labs, etc.), but let's focus on one that recently cropped up in my daily routine.

I have been working with a couple of large service provider customers who wish to provide a hosted NSX option for their customers. In general, this is relatively straightforward, assuming the customers share infrastructure and their administrators go through some kind of portal that limits their access. However, for their highest-end customers, they would like to offer a dedicated hardware option that DOES allow the customer admins direct vCenter Web Client access.

For example, each customer gets, say, 10 dedicated vSphere 6.0 U1 physical hosts to be their compute capacity. However, they would also like to include a dedicated management cluster for each customer that hosts vCenter, NSX Manager, NSX Controllers, etc. This way, they can easily grant the customer's administrators access to the compute cluster and to the vCenter Web Client, so that they can create VMs and NSX ESGs/DLRs/DFW rules. At the same time, they can easily prevent those same people from directly accessing the hosts and underlying VMs that those management entities are running on.
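To make that access model a bit more concrete, here is a rough pyVmomi sketch (not the providers' actual tooling) of the kind of thing the provider-side automation might do: grant the customer's admin group a role on their compute cluster only, so the Web Client shows them their own workloads but never the management cluster underneath. The vCenter address, credentials, group name, and role name are all placeholders I made up.

```python
# Sketch only: grant a customer admin group access to their compute cluster,
# while the management cluster never gets a permission entry for that group.
# Hostname, credentials, group, and role names below are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab shortcut; validate certs in production
si = SmartConnect(host="vcsa01.provider.example", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

# Find the customer's compute cluster by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
compute_cluster = next(c for c in view.view if c.name == "customer01-compute")

# Look up a pre-created role scoped to what tenant admins are allowed to do.
auth = content.authorizationManager
role_id = next(r.roleId for r in auth.roleList if r.name == "Customer-Compute-Admin")

# Apply the permission at the cluster and let it propagate downward.
perm = vim.Permission(principal="CUSTOMER01\\vsphere-admins", group=True,
                      roleId=role_id, propagate=True)
auth.SetEntityPermissions(entity=compute_cluster, permission=[perm])

Disconnect(si)
```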

In the pre-NSX world, this wouldn't have been such a big deal. You just have one giant shared management cluster running the vCenter VMs, and each customer is only allowed port 443 access to the vCenter VM that they own. This shared management cluster is in turn managed by a shared vCenter that no one but the service provider can touch. Easy. However, NSX poses a challenge because the NSX Controllers must be deployed to the same vCenter that manages the compute capacity clusters. We dug into this with VMware product management, and while it is theoretically possible to do things like "transplant" the controllers once they've been deployed, it is thoroughly unsupported. (I tested this, by the way, and it does seem to *function*.)

One direction we could go is to use nested ESXi hosts and give each customer a "virtual dedicated management cluster". This would look something like the following:

Nested ESXi hosts for management clusters

While this isn't supported out of the box, an RPQ for this is relatively simple to negotiate. I've had other service providers doing DaaS with SRM that use nested ESXi for the DR side. It's a much more well-trodden road. However, customers are leery of this when we're talking about their highest-paying clients. They want deterministic performance for their controllers, and it's harder to make that a reality with nested ESXi.

Therefore, we started looking at ways to deploy and operate large numbers of physical ESXi hosts that each have very modest requirements, at minimal cost. Intel Atom's Avoton architecture fits the bill nicely, but I've never personally worked with Atom-based PCs that weren't running some sort of Linux. I vaguely knew that vSphere supported Atom, so I dug into the details.

Three vendors have certified vSphere on Atom-based servers

NEC's DX10a-A can pack 46 Atom-based servers into 2U and is certified for vSphere 5.5 U1-U3. That's 700 vSphere hosts per rack!

Supermicro's new X10 MicroBlade system offers an Atom-based blade, the MBI-6418A-T7H, which supports vSphere 5.5 U1-U3. Each blade has 4 "sub-blades" (I guess you could call them that) onboard. That's 112 vSphere hosts in 6U and 784 per rack.

Cisco's UCS E-Series NCE blades support 5.5 U2-6.0 U1. It's the only option that has so far been certified for vSphere 6.0, but it's more of a ROBO solution (from what I can tell).
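For a rough sanity check on those per-rack figures, the arithmetic is just chassis-per-rack times hosts-per-chassis. The sketch below assumes a standard 42U rack with every U available; reserving space for top-of-rack switching and power will pull the real numbers down.

```python
# Back-of-the-envelope rack density. Assumes a 42U rack with no U reserved
# for switching or PDUs; real deployments will land lower.
RACK_U = 42

def hosts_per_rack(chassis_u: int, hosts_per_chassis: int) -> int:
    """Whole chassis that fit in the rack, times hosts per chassis."""
    return (RACK_U // chassis_u) * hosts_per_chassis

print(hosts_per_rack(6, 112))  # Supermicro MicroBlade: 7 chassis x 112 = 784 hosts
print(hosts_per_rack(2, 46))   # NEC: 21 chassis x 46 = 966 hosts at full density
                               # (the ~700 figure above implies roughly 15 chassis per rack)
```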

Because service providers tend to be fans of white boxes, the Supermicro option has garnered the most interest. However, before I go recommending this to a customer for hosting NSX Manager and Controllers, I need to put the Intel Atom Avoton to the test. It turns out Supermicro also makes a cheap Mini-ITX board that is certified for vSphere 5.5U3: the A1SAi.

I decided to hand-build one of these to do some perf testing against. I was able to do it for $650 for a single unit. When I priced out large quantities of superdense servers, I could get the cost down much lower, as you might imagine.

Parts needed to build a fully supported ESXi host for $650

Parts that I purchased initially. Note that you don't need to buy a separate CPU; it comes with the MB. Similarly, a 90W power supply comes with the case (and it is sufficient for this build).

Parts that I inevitably had to run to the local parts store to get because they weren't included or I didn't like the included ones:

  •   QTY 1 – Pack of tiny screws for the SSD ($5 at DE)
  •   QTY 1 – SATA III cable of reasonable length ($5 at DE)

Note that in real life I'd probably use some kind of network-based storage for the VMs and thus wouldn't need the SSD. However, I wanted to play around with caching and see how good the onboard SATA controller is in general.
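As a very rough first pass at the "how good is the SATA controller" question, something like the following gives a ballpark sequential-write number from inside a test VM on the SSD-backed datastore. The mount path is hypothetical, and a real test would use a proper tool (fio, Iometer, etc.); this is just a sketch.

```python
# Crude sequential-write throughput check against SSD-backed storage.
# The path below is hypothetical; point it at a filesystem on the disk under test.
import os
import time

PATH = "/mnt/ssd-test/throughput.bin"
BLOCK = 4 * 1024 * 1024          # 4 MiB per write
TOTAL = 2 * 1024 * 1024 * 1024   # 2 GiB total

buf = os.urandom(BLOCK)
start = time.time()
with open(PATH, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += BLOCK
    f.flush()
    os.fsync(f.fileno())         # make sure the data actually reached the device
elapsed = time.time() - start

print(f"~{TOTAL / elapsed / 1_000_000:.0f} MB/s sequential write")
os.remove(PATH)
```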

Assembly of the Mini-ITX host

Disassemble the case and get your stuff together.


Take the drive holder off the bottom of the case and screw in the SSD. Note that it can hold two drives. Then screw the holder back on. NOTE: I didn't take pictures of this step, but you must also swap out the case's default I/O backplate and insert the one that came with the Supermicro MB. It has a totally different layout and will be much harder to do later.


Install the motherboard in the case. There are 4 corner screws that hold it in; no standoffs are needed, since the case has its own. Connect the power supply to the MB and install the SO-DIMMs. Note that if you only have 2, they go in the outer slot on either side of the CPU.


Connect the SATA III cable and run it through the holes near the top. Also feed through the 2 SATA power plugs (they should be in the bundle coming from the PS), then flip the thing over so you can connect them to the SSD.


Connect the power switch, HDD light, etc. to the MB. This actually took some messing around to get right because it's really hard to get your fingers in there. Finally, button it up and put the Cruzer flash drive into one of the USB ports. It winds up looking like this. Sprite for scale… this thing is tiny, yet it has 4 NIC ports AND an out-of-band management interface!


 

Installing ESXi 5.5U3

Connect at least one NIC and the IPMI management port to your network. The IPMI interface will grab a DHCP address the first time it boots, and you can log into it via a web browser (default credentials: ADMIN/ADMIN).


From here you can set a static IP for it to use going forward. It also has IP KVM, so you can console into the thing remotely and install ESXi via virtual media. The fact that you can work with a Mini-ITX box like it was an enterprise server just seems really cool to me.
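If you end up building more than one of these, the static-IP step is easy to script instead of clicking through the web UI. Here is a sketch using ipmitool from another machine on the network; all the addresses are made-up examples, channel 1 is the usual LAN channel on Supermicro BMCs, and you would change the default password at the same time.

```python
# Switch the BMC from DHCP to a static address using ipmitool.
# Assumes ipmitool is installed locally and the BMC is reachable at whatever
# address it grabbed from DHCP; all IPs below are hypothetical.
import subprocess

BMC_DHCP_IP = "192.168.1.57"     # address the BMC pulled from DHCP
AUTH = ["-I", "lanplus", "-H", BMC_DHCP_IP, "-U", "ADMIN", "-P", "ADMIN"]

commands = [
    ["lan", "set", "1", "ipsrc", "static"],
    ["lan", "set", "1", "ipaddr", "192.168.1.201"],
    ["lan", "set", "1", "netmask", "255.255.255.0"],
    ["lan", "set", "1", "defgw", "ipaddr", "192.168.1.1"],
]

for cmd in commands:
    # Note: changing the address can drop the session partway through;
    # if that happens, re-run the remaining commands against the new IP.
    subprocess.run(["ipmitool", *AUTH, *cmd], check=True)
```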

Things to watch out for

Make sure you get 204-pin DDR3 ECC SO-DIMMs. The first SO-DIMMs I got were non-ECC and, while they fit in the slots, the server wouldn't boot.

Not all screws were included. I ended up having to scrounge up 4 tiny screws to mount the SSD.

On the Antec case, it would probably be best to simply unmount the front-panel harness, since there aren't any USB 2.0 headers on the board to attach its 4 ports to.

Frankly, I don't know how the PCIe slot could work with this, or any Mini-ITX case like it. If you want to use that slot, you'll need to find a case that will accommodate it. I did see a rackmount case on Supermicro's website, but it only holds one drive, so bear that in mind.

Afterthoughts

Overall, this build was extremely simple, and to be honest, I was expecting the IPMI interface to be half-functional. It works just like a "real" enterprise iLO/DRAC/etc.

Next Up

The next article in this series will cover building a three-node management cluster from these boxes and doing some perf testing with NSX and the VCSA. I want to know where this thing's limits are!

 

Author: sean@nsxperts.com
