Update error VCSA 7 – vCenter Server not operational

During patching of a vCenter Server Appliance (VCSA), problems can occur: contact with the update source may be lost, or an operator may cancel the whole process. If you then try to reapply the patch, you might see an error like the one in the picture below.

Update Installation failed. VCenter Server is not operational.

In the VAMI interface of vCenter everything looks fine: all services are up and running, and the overall status is green. Even a reboot of the appliance doesn’t help. The source of the problem is an interrupted update procedure that left a status file behind, which we need to remove manually.

To do so, open an SSH shell to the vCenter Server Appliance and change to the directory where the file was left.

# cd /etc/applmgmt/appliance

You’ll see a file called software_update_state.conf. Under normal circumstances this file is removed after a successful update, but because something went wrong it wasn’t cleaned up. Let’s have a brief look inside the file.

# cat software_update_state.conf
"state": "INSTALL_FAILED",
"version": "",
"latest_query_time": "2020-09-17T11:42:37Z"

We can see that there has been a failed update on this VCSA. You can simply remove the file.

# rm software_update_state.conf

If you now trigger a new patch installation, it will succeed.
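The steps above can be wrapped in a small defensive script that removes the state file only when it actually records a failed installation. This is a sketch, not an official VMware procedure; the path and file name are the ones shown above, and the function takes the directory as a parameter so the logic can be tried out safely outside the appliance.

```shell
#!/bin/sh
# Remove a stale VCSA update state file, but only if it records a
# failed installation (an update may otherwise be in progress).
cleanup_update_state() {
    state_file="$1/software_update_state.conf"
    if [ ! -f "$state_file" ]; then
        echo "no state file"
        return 0
    fi
    if grep -q 'INSTALL_FAILED' "$state_file"; then
        rm "$state_file"
        echo "removed stale state file"
    else
        echo "state file present but not INSTALL_FAILED; leaving it"
    fi
}

# On the appliance you would call:
# cleanup_update_state /etc/applmgmt/appliance
```

After running it, trigger the patch installation again from VAMI as described above.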

Using more than one dvSwitch for overlay traffic in a VCF 4.0.1 VxRail cluster

SDDC-Manager is the central management tool in a VMware Cloud Foundation (VCF) environment. You can add workload domains (WLD), import clusters into workload domains, or add Kubernetes namespaces. For every task there’s a workflow in the GUI of SDDC-Manager.

Currently, as of VCF version 4.0.1, it is not possible to add a cluster with more than two uplinks or more than one dvSwitch to a WLD. If you try to do that in the GUI, you can only define one dvSwitch with two uplinks.

What now?

There’s help inside SDDC-Manager.


VMUG Germany bi-weekly virtual events – Session 7: VMware Cloud Native & Open Source Engagements

On August 12th 2020, VMUG Germany will host another virtual event in its bi-weekly series, with Bjoern Brundert, who is going to talk about VMware Cloud Native & Open Source Engagements.

Bjoern Brundert
Principal Solution Engineer, Application Platforms,
Office of the CTO, Global Field, VMware

Link to registration

In cooperation with several local German, Swiss and Austrian VMUG chapters we are hosting short and crisp bi-weekly virtual events.

One speaker, one topic, one hour.

Every second Wednesday from 17:00 to 18:00 CEST we’re presenting interesting talks about new developments, products and trends around the VMware ecosphere.

Preview of further events in this series

Wednesday, August 26th 2020 at 5 p.m. CEST – Developing modern applications with VMware Tanzu Build Portfolio (German language)

Speaker: Ulrich Hoelscher, Senior Platform Architect Modern Apps BU, VMware

Strange thermal Issue after Update to ESXi 7.0b

Patch build 16324942 for ESXi 7.0 was released on June 23rd 2020. It raises ESXi 7.0 GA to ESXi 7.0b. As usual, I patched my homelab systems ASAP. As all hosts are fully HCL compliant, I chose a fully automated cluster remediation with vSphere Lifecycle Manager (vLCM).

The specs

Server: SuperMicro SYS-E300-9D-8CN8TP
ESXi: 7.0 GA build 15843807 (before) / 7.0b build 16324942 (after)
HCL compliant: yes

During the host reboot I noticed a temperature warning LED on the chassis. A look into IPMI revealed a critical CPU temperature state, and the fans responsible for CPU airflow were running at maximum speed.

System temperature, however, was moderate, and under these conditions the fans usually run at low to medium speed. Air intake temperature was 25°C.
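The same readings can also be pulled from a shell with ipmitool. The sketch below is hedged: the BMC hostname bmc.example.lan and the admin user are placeholders, the password is taken from the IPMI_PASSWORD environment variable via -E, and sensor names and groupings vary by board.

```shell
#!/bin/sh
# Query BMC temperature or fan sensors over IPMI (lanplus interface).
# Hostname and user are placeholders; set IPMI_DRY_RUN=1 to print the
# command instead of contacting a real BMC.
ipmi_query() {
    sensor_type="$1"  # e.g. Temperature or Fan
    cmd="ipmitool -I lanplus -H bmc.example.lan -U admin -E sdr type $sensor_type"
    if [ -n "$IPMI_DRY_RUN" ]; then
        echo "$cmd"
    else
        $cmd
    fi
}

# ipmi_query Temperature   # CPU, system and peripheral temperatures
# ipmi_query Fan           # fan speeds in RPM
```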

My ESXi nodes rebooted with the new build 16324942 and there were no errors in vLCM. But I could hear that something was wrong: a fan running at over 8,000 RPM tells you there IS something to look after. The boot procedure also took much longer than usual.

I quickly shut down the whole cluster in order to avoid a core meltdown.
