GPU-enabled K8s Clusters in vSphere with Tanzu

Using GPUs in container workloads is an important requirement for developers working with machine learning and artificial intelligence.

A VI admin can create a custom VM class that defines a vGPU specification. Developers can then use this class to assign GPU resources to their workloads. The VM class defines both node placement and the vGPU profile.

This is available not only for GPU-enabled TKG clusters, but also for standalone VMs. The use of custom VM classes simplifies the consumption of GPU resources in ML/AI applications.
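Under the hood, such a class is represented by a VirtualMachineClass object in the Supervisor cluster. A minimal sketch of what a vGPU-enabled class might look like is shown here; the CPU and memory sizes and the vGPU profile name (grid_v100-8q) are assumptions for illustration, and in practice a VI admin typically creates the class through the vSphere Client:

apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineClass
metadata:
  name: gpu-vmclass
spec:
  hardware:
    cpus: 8                          # assumed sizing
    memory: 32Gi                     # assumed sizing
    devices:
      vgpuDevices:
      - profileName: grid_v100-8q    # vGPU profile; depends on the installed GPU and NVIDIA driver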

A sample TKG cluster specification that consumes this class for its worker nodes is shown below:

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: gpu-cluster
  namespace: tkg-dev
spec:
  distribution:
    version: v1.20.2
  topology:
    controlPlane:
      count: 1
      class: best-effort-small      # control plane nodes do not need the GPU class
      storageClass: gpu-vm-policy
    workers:
      count: 3
      class: gpu-vmclass            # custom VM class with the vGPU specification
      storageClass: gpu-vm-policy

The same VM class can also be consumed by a standalone VM, for example:

apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: gpu-vm
  namespace: tkg-dev
spec:
  className: gpu-vmclass
  imageName: ubuntu-custom-gpu
  storageClass: gpu-vm-policy
  powerState: poweredOn
  networkInterfaces:
  - networkName: dev-network
    networkType: vsphere-distributed

This blog post used to be part of my recent vSphere 7 Update 3 What's New article, but it was withdrawn at VMware's request under an extended embargo until October 5, 2021.

vSphere 7 Update 3 – What’s New

This blog post was under embargo until September 28, 2021, 8:00 am (PT) / 17:00 (CEST). The fact that you can read it now means that vSphere 7 Update 3 has (probably) already been released.

[Update, September 29, 2021]: The download is not yet available. Maybe we need to wait until VMworld 2021 next week.

What’s New

VMware vSphere 7 Update 3 comes with a wide range of innovations. They can be categorized into the sections below:

  • Tanzu with Kubernetes
  • Lifecycle, Upgrade and Patching
  • Artificial Intelligence & Machine Learning
  • Resource Management
  • Availability & Resiliency
  • Security & Compliance
  • Guest OS and Workloads
  • Storage
  • Networking
  • vSphere Management & APIs

A further set of features goes into vSAN, but these will be covered in a separate post.

Continue reading “vSphere 7 Update 3 – What’s New”

VMware Bitfusion and Tanzu – Part 3: Utilize GPU from Kubernetes Pods and TKGS

This is a multi-part post focused on the VMware Bitfusion product. I will give an introduction to the technology, show how to set up a Bitfusion server, and explain how to use its services from Kubernetes pods.

Parts 1 and 2 covered what Bitfusion is and how to set up a Bitfusion server cluster. The challenging part is making this Bitfusion cluster usable from Kubernetes pods.

In order for containers to access Bitfusion GPU resources, a few general conditions must be met.

For this tutorial I assume that a configured vSphere with Tanzu cluster is available, as well as a namespace, a user, a storage class and the Kubernetes CLI tools. The network can be provided either by NSX-T or by distributed vSwitches combined with a load balancer such as the Avi load balancer.

In the PoC described here, Tanzu on vSphere was used without NSX-T for simplicity, together with the Avi load balancer, now officially called NSX Advanced Load Balancer.

We also need a Linux system with access to GitHub, or to a mirror, to prepare the cluster.

The procedure in a nutshell:

  • Create the TKGS cluster
  • Download the Bitfusion bare-metal tokens and create a K8s secret
  • Clone the Git project and modify the Makefile
  • Deploy the device plugin to the TKGS cluster
  • Deploy pods that consume Bitfusion GPUs (see the sketch below)
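
As a preview of the last step, a pod that consumes remote GPUs through the Bitfusion device plugin might look roughly like the sketch below. The annotation and resource names (auto-management/bitfusion, bitfusion.io/gpu-amount, bitfusion.io/gpu-percent) follow the conventions of VMware's bitfusion-with-kubernetes-integration project and should be verified against the version you deploy; the image and command are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: bitfusion-test
  annotations:
    auto-management/bitfusion: "all"        # ask the device plugin to inject the Bitfusion client
spec:
  containers:
  - name: tensorflow
    image: tensorflow/tensorflow:2.4.1-gpu  # placeholder image
    command: ["python", "-c", "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"]
    resources:
      limits:
        bitfusion.io/gpu-amount: 1          # number of remote GPUs requested from the Bitfusion cluster
        bitfusion.io/gpu-percent: 50        # share of each GPU's memory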
Continue reading “VMware Bitfusion and Tanzu – Part 3: Utilize GPU from Kubernetes Pods and TKGS”

VMware Bitfusion and Tanzu – Part 2: Bitfusion server setup

This is a multi-part post focused on the VMware Bitfusion product. I will give an introduction to the technology, show how to set up a Bitfusion server, and explain how to use its services from Kubernetes pods.

Bitfusion Server setup preparation

A Bitfusion Server Cluster must meet the following requirements:

  • vSphere 7 or later
  • At least a 10 Gbit LAN for the Bitfusion data traffic in smaller or PoC deployments. High bandwidth and low latency are essential; 40 Gbit or even 100 Gbit is recommended.
  • NVIDIA GPU with CUDA support and DirectPath I/O capability, for example:
    • Tesla P40 (Pascal)
    • Tesla V100
    • Tesla T4
    • A100 Tensor Core
  • At least 3 Bitfusion servers per cluster for high availability

This setup guide assumes that the graphics cards have already been installed in the ESXi 7+ hosts and that the hosts have been joined to a cluster in vCenter.

Continue reading “VMware Bitfusion and Tanzu – Part 2: Bitfusion server setup”