Last Tuesday a delivery notification made me happy: my hardware shipment was on its way. Now it was time to get the cabling ready. I can’t stand Gordian knots of power cords and patch cables, so I like to keep them neatly bundled with velcro tape. To keep things simple, I started with a non-redundant approach for vSAN traffic and LAN. That still left eight patch cables to label and bundle, plus four cables for the IPMI interfaces. I found out later that the IPMI interface falls back to the LAN interface if it isn’t connected. That’s nice — it saves me four cables and switch ports.
All four hosts arrived fully assembled and had already passed a burn-in test. The servers are compact, about the size of a small pizza box: 25.5 cm wide, 22.5 cm deep and 4.5 cm high. But before I press power-on, I need to have a look under the hood. 🙂
Let’s start with the rear side. As you can see in the picture, there are plenty of interfaces for such a small system. In the lower left corner there’s the 12 V connector, which can be fastened with a screw cap. Next to it are two USB 3.0 connectors with the IPMI interface above them. The IPMI provides console and video redirection (HTML5 or Java) — no extra license needed.
Then we have four 1 Gbit (i350) ports and four 10 Gbit (X722) ports, two of which are SFP+. In the lower right there’s a VGA interface. Thanks to console redirection it isn’t strictly necessary, but it’s good to have one in an emergency.
Testing software and playing with new technologies is a crucial part of my business. Some solutions can be deployed to a simple VMware Workstation VM, but others require complex server and networking architectures. In the past I did most of my tests with nested vSphere or vSAN clusters. Well, it works… somehow… but you can imagine that a nested vSAN cluster with virtual flash devices backed by spinning (SATA) disks sucks — err, does not perform very well.
I needed some bare metal for realistic testing, so I kept looking for phased-out customer servers. The problem is that many customers use their ESXi hosts until they literally fall apart or drop off the HCL, and hardware that can’t run the latest VMware products is just scrap iron. Furthermore, rackmount servers are usually noisy, power-hungry and take up a lot of space — not the best choice for your office.
I had been searching for a more compact solution for a while. The Intel NUC series looked like a possible candidate. I know they’re quite popular in the vCommunity, but what kept me from buying was the lack of network adapters and the limited options for installing caching and storage devices.
Earlier this year I got a hint to look at the Supermicro E300-9D series. This micro server looked promising: still small, but equipped with eight NICs (four of them 10G) and M.2 connectors for NVMe flash devices. William Lam has posted an excellent article about the E300-9D. This little gem can be equipped with a SATA DOM boot device and up to three NVMe devices, AND it is listed on the VMware HCL. How cool is that?!