Update Tanzu Workload Management

This is a brief guide on how to upgrade Tanzu Workload Management within a vSphere cluster.

Kubernetes Release and Patch Cycles

Kubernetes versions are specified as x.y.z following Semantic Versioning terminology, where x is the major version, y is the minor version, and z is the patch version. For example, v1.22.6 denotes major version 1, minor version 22, and patch level 6. Minor versions are released approximately every 3-4 months, with several patch releases published within each minor version in between.

The Kubernetes project maintains release branches for the most recent three minor versions (1.24, 1.23, 1.22). Since Kubernetes 1.19, each minor version receives patch support for approximately one year. Keeping the Kubernetes versions in Tanzu up to date is therefore highly recommended.
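Before planning an upgrade, it can be helpful to check which Kubernetes releases are currently offered to your environment. A minimal check, assuming kubectl is already logged in to the Supervisor cluster of your vSphere with Tanzu setup:

kubectl get tanzukubernetesreleases

The same resource can usually also be queried via its short name tkr.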

Step 1 – Update vCenter

This step is not mandatory, but it is recommended. vCenter updates are often accompanied by new Kubernetes versions. Notifications about available updates are shown in the vSphere Client.


Upgrade of a K3s Lightweight Kubernetes Cluster

K3s is a lightweight, highly available open source Kubernetes cluster platform designed for easy and resource-efficient installation. K3s is provided as a package of less than 60 MB. The package is optimized for ARM platforms and can therefore also run on hardware such as a Raspberry Pi, or as a guest VM on ESXi-on-ARM.

Prerequisites and collection of information

K3s is a cluster solution, so the order in which the nodes are updated matters. The update starts on the master node, so first we need to find out which node has this role. The easiest way to do this is with a kubectl command:

kubectl get node
NAME                STATUS   ROLES    AGE     VERSION
k3node1.lab.local   Ready    master   2y43d   v1.19.3+k3s3
k3node2.lab.local   Ready    <none>   2y42d   v1.19.3+k3s3
k3node3.lab.local   Ready    <none>   2y42d   v1.19.3+k3s3

The output above shows my three K3s nodes with their FQDN, status, role, age, and version. Here, k3node1 has the master role.
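On larger clusters you can also filter directly for the master role with a label selector. This is just a sketch and assumes the default node-role.kubernetes.io/master label that K3s sets on its server nodes:

kubectl get node -l node-role.kubernetes.io/master=true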

Alternatively, you can execute the command with wide output, which adds details such as internal IP, OS image, and container runtime:

kubectl get node -o wide

Heads-up: Linux Boot and Reboot Issues with Maxtang NX6412 Solved

Cohesity vExpert gift

I recently became the owner of a Maxtang NX6412-B11 Mini PC. Cohesity gave away these barebones to vExperts at VMware Explore EMEA in Barcelona. Once again, a big thank you to Cohesity for their support of the community!

The fanless MiniPC with Elkhart Lake chipset is well-equipped. It has 2x 1 Gbit LAN, 1x USB-C (front), 2x USB 3.2 (front), 2x USB 2.0, 2x HDMI 2.0, and an audio jack.

Featured ports on the rear side.

The MiniPC will be a great addition to my homelab. I had intended to install Tanzu Community Edition on it. Unfortunately, the project has since been discontinued by VMware, and the removal of its packages from GitHub has been announced. 🙁

Hardware finish

The barebone still had to be fitted with RAM and a flash disk. I installed a Samsung 860 EVO Series 1 TB M.2 SATA SSD and two 16 GB Crucial SO-DIMM DDR4-3200 modules.

Reboot Issues with Linux

With the SATA SSD and the RAM installed, the machine was ready to boot. Ubuntu 22.04 LTS was used as the operating system. After installation, the usual reboot was requested. However, the PC did not shut down completely and remained stuck at “Reached target shutdown”. It had to be powered off hard. Subsequent boots also took several minutes, which is very unusual for Ubuntu. To rule out an Ubuntu-specific problem, I also tried an installation with Fedora. The result was exactly the same.

The solution

After a lengthy search, I found a clue specific to the Elkhart Lake (EHL) hardware platform. The fix is to disable a kernel module for the Intel Elkhart Lake SoC chipset. This can be done by adding it to the blacklist.conf file.

sudo vi /etc/modprobe.d/blacklist.conf

The line below must be added to blacklist.conf:

blacklist pinctrl_elkhartlake

Quit the vi editor with [ESC] :wq! (save and exit). Then rebuild the initramfs so that the change takes effect:

sudo update-initramfs -u
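After the next cold boot you can verify that the module is no longer loaded. This assumes the module name matches the blacklist entry above; if the command returns no output, the blacklist is effective:

lsmod | grep pinctrl_elkhartlake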

The next shutdown was still delayed, but after a cold boot the OS came up within a few seconds.

I hope this hint helps someone, especially my vExpert colleagues who also received the Cohesity gift. Sharing is caring. 🙂

Manage ESXi Coredump Files

Okay, I admit it: this is not a new topic, but it cost me some time in a client project. Since this blog also acts as a swap partition for my brain, I wrote it down for future reference. It is important to follow the steps in the right order so that the changes are preserved after a reboot.

Why a Coredump File?

Modern ESXi installations starting with version 7 use a new partition layout on the boot device, and coredumps are also located there. This only applies, however, when the boot medium is neither a USB flash device nor an SD card. In those cases the coredump is relocated to a VMFS datastore with at least 32 GB capacity.

This is exactly the case I found in a customer environment. The system had been migrated from vSphere 6.7 and therefore still had the old boot layout on an (at that time still fully supported) SD-card RAID1. We found a vmkdump folder with files for each host on one of the shared VMFS datastores. This (VMFS5) datastore was supposed to be decommissioned and replaced with a VMFS6 datastore. (Side note from the VCI: there is no online migration path from VMFS5 to VMFS6.) 😉 So the vmkdump files had to be removed from there.

Procedure

First, we get an inventory of the coredump files.

esxcli system coredump file list

All coredump files of all ESXi hosts are listed here. Each line contains the path and the Active and Configured states (true or false). Active means that this is the coredump file currently in use by this host; only the coredump file of the current host has active=true, all other files belong to other hosts and therefore show active=false. It is also important that the value for Configured is ‘true’, otherwise the setting will not survive a reboot.

By default, the host chooses the first matching VMFS datastore. This is not necessarily the desired one.
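The dumpfile that is currently active and the one that is configured for this host can also be queried directly:

esxcli system coredump file get

The output shows both paths and makes it easy to spot a host that dumps to an unintended datastore.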

Remove the current Coredump File

First we delete the active coredump file of the host. We have to force the removal because it is set as active=true.

esxcli system coredump file remove --force

If we execute the list command from above again, there should be one line fewer.

Add a new Coredump File

The next command creates a new coredump file on the destination datastore. If it does not already exist, a vmkdump folder is created there and the dumpfile is placed in it. We specify the desired file name without an extension, because the extension (.dumpfile) is added automatically.

esxcli system coredump file add -d <Name | UUID> -f <filename>

Example: the host is named “ESX-01” and the VMFS datastore is named “Service”. The datastore can be specified either by its display name or by its datastore UUID.

esxcli system coredump file add -d Service -f ESX-01

A vmkdump folder will be created on the designated datastore, containing a file named ESX-01.dumpfile. We can verify this using the list command.

esxcli system coredump file list

A new line will appear with the full path to the new dumpfile. However, the status is still active=false and configured=false. It might be useful to copy this full path to the clipboard, because it is required in the next step.
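If the list contains many entries, the new line can be filtered out directly, which makes copying the path easier. A small example, assuming the file name chosen above:

esxcli system coredump file list | grep ESX-01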

Activate Dumpfile

In the following step, we set the created dumpfile to active. This way, the setting is retained even after a host reboot. We specify the complete path to the dumpfile; the path copied to the clipboard is helpful here and avoids typos.

esxcli system coredump file set -p <path_to_dumpfile>

Example:

esxcli system coredump file set -p /vmfs/volumes/<UUID>/vmkdump/ESX-01.dumpfile

A final list command validates the result; the new dumpfile should now show active=true and configured=true.
