Unclaim vSAN Disks in ESXi Host

While playing with the latest ESXi / vSAN beta, I ran into a problem. I was about to deploy a vCenter Server Appliance (VCSA) onto a single ESXi host that was designated to become a vSAN cluster. During the initial configuration of vCenter something stalled. Needless to say, it was a DNS problem. 😉

That part of the vCenter/vSAN deployment is delicate. If something goes wrong here, you have to start over and deploy a new vCenter appliance. When you run the installer a second time (after you have fixed your DNS issues), you won't see any disk devices to be claimed by vSAN. Where have they gone? Well, actually they are still there, but during the first deployment attempt they were claimed by vSAN and now form a vSAN datastore. A greenfield vSAN deployment on the first host, however, needs disks that do not contain any vSAN or VMFS datastore.
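You can confirm what happened from the ESXi shell. A minimal sketch (device names and UUIDs will of course differ on your host):

```shell
# Show whether the host already joined a (single-node) vSAN cluster
esxcli vsan cluster get

# List the disks that were claimed by vSAN during the failed deployment,
# including their role (cache vs. capacity) and disk group membership
esxcli vsan storage list
```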

How to release disks?

Usually you would remove disk groups in vCenter. But we don't have a vCenter at this point. Looks like a chicken-and-egg problem. But we do have a host, a shell and esxcli. Start the SSH service on the host and connect to the shell (e.g. with PuTTY).
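As a minimal sketch of the idea: removing a disk group by its cache device releases the cache disk and all associated capacity disks, and afterwards the host can leave the orphaned single-node vSAN cluster. The device name below is just a placeholder, take the real cache device from the output of esxcli vsan storage list.

```shell
# Optional: stop the host from re-claiming the disks automatically
esxcli vsan storage automode set --enabled false

# Remove the whole disk group by pointing at its cache tier device
# (placeholder device name - use the one reported by 'esxcli vsan storage list')
esxcli vsan storage remove -s naa.xxxxxxxxxxxxxxxx

# Leave the orphaned single-node vSAN cluster created by the failed deployment
esxcli vsan cluster leave
```

After that, the installer will see the disks as empty again and can claim them for a fresh vSAN datastore.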

Continue reading “Unclaim vSAN Disks in ESXi Host”

vSAN Homelab Cluster Ep.2

If you’ve missed it read part 1 – Planning phase.

Unboxing

Last Tuesday a delivery notification made me happy: the hardware shipment was on its way. Now it was time to get the cabling ready. I can't stand Gordian knots of power cords and patch cables; I like to keep them properly tied together with velcro tape. To keep things simple, I started with a non-redundant approach for vSAN traffic and LAN. That's still eight patch cables that had to be labeled and bundled, plus four cables for the IPMI interface. I found out later that the IPMI interface falls back to the LAN interface if not connected. That's nice. It saves me four cables and switch ports.
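On the ESXi side, a non-redundant layout like this maps to one uplink and one VMkernel interface per traffic type. The following is only a sketch of how that could look with esxcli; the vSwitch name, port group name, vmnic numbering and IP addressing are assumptions, not the actual configuration of these hosts:

```shell
# Dedicated standard vSwitch for vSAN traffic with a single 10G uplink (names and numbers are placeholders)
esxcli network vswitch standard add --vswitch-name=vSwitch-vSAN
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-vSAN --uplink-name=vmnic4
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-vSAN --portgroup-name=vSAN

# VMkernel interface with a static IP, tagged for vSAN traffic
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vSAN
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=10.0.10.11 --netmask=255.255.255.0
esxcli network ip interface tag add -i vmk1 -t VSAN
```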

Host Hardware

All four hosts came fully assembled and had passed a burn-in test. The servers are compact and have the size of a small pizza box: 25.5 cm wide, 22.5 cm deep and 4.5 cm high. But before I press power-on, I need to have a look under the hood. 🙂

Let's start with the rear side. As you can see in the picture, there are plenty of interfaces for such a small system. In the lower left corner there's the 12 V connector, which can be fastened with a screw cap. Then there are two USB 3.0 connectors and the IPMI interface above them. The IPMI comes with console and video redirection (HTML5 or Java). No extra license needed.

Then we have four 1 Gbit (i350) ports and four 10 Gbit (X722) ports, two of which are SFP+. In the lower right there's a VGA interface. Thanks to console redirection it isn't strictly necessary, but it is good to have one in an emergency.

Continue reading “vSAN Homelab Cluster Ep.2”

Runecast 3.0 requires elevated privileges for HCL checks

A couple of days ago Runecast Analyzer was upgraded to version 3.0.0. With that upgrade a very important beta feature became GA: HW Compatibility and Upgrade Simulator.

I used to run the Runecast service account with read-only privileges. That was sufficient up to version 2.7.x; even the hardware compatibility check (beta) worked with read-only privileges. After upgrading my appliance to version 3.0.0 (GA), I found a notification: missing privileges.

Once you open the host details and click on the I/O devices tab, there's further information.

Continue reading “Runecast 3.0 requires elevated privileges for HCL checks”

vSAN Homelab Cluster Ep.1

Planning Phase

Testing software and playing with new technologies is a crucial part of my business. Some solutions can be deployed to a simple VMware Workstation VM, but others may require complex server and networking architectures. In the past I did most of my tests with nested vSphere or vSAN clusters. Well, it works… somehow… but you can imagine that a nested vSAN cluster with virtual flash devices backed by spinning (SATA) disks sucks, err… does not perform very well.

I needed some bare metal for realistic testing, so I kept looking for phased-out customer servers. The problem is that many customers use their ESXi hosts until they literally fall apart or drop off the HCL. Hardware that isn't capable of running the latest VMware products is just scrap iron. Furthermore, rackmount servers are usually noisy, energy-hungry and require a lot of space. Not the best choice for your office.

I had been searching for a while for a more compact solution. The Intel NUC series looked like a possible candidate. I know they're quite popular in the vCommunity, but what kept me from buying was their lack of network adapters and the limited ability to install caching and storage devices.

Earlier this year I got a hint to look at the Supermicro E300-9D series. This micro server looked promising: still small, but equipped with 8 NICs (four of which are 10G) and M.2 connectors for NVMe flash devices. William Lam has posted an excellent article about the E300-9D. This little gem can be equipped with a SATA DOM boot device and up to three NVMe devices, AND it is listed on the VMware HCL. How cool is that?!

Continue reading “vSAN Homelab Cluster Ep.1”