GPU enabled K8s Clusters in vSphere with Tanzu

Using GPUs in container workloads is an important requirement for developers working with machine learning and artificial intelligence.

A VI admin can create a custom VM class that defines a vGPU specification. Developers can then use this class to assign GPU resources to their workloads. The VM class defines node placement and the vGPU profile.

This is available not only to GPU-enabled TKG clusters, but also to standalone VMs. The use of custom classes simplifies the consumption of GPU resources in ML/AI applications.
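As an illustration, such a custom VM class could be declared through the VM Operator API. This is only a sketch, assuming the `vmoperator.vmware.com/v1alpha1` schema; the exact field names and the vGPU profile (`grid_v100-4q` here is hypothetical) depend on your vSphere release and NVIDIA driver:

```yaml
kind: VirtualMachineClass
apiVersion: vmoperator.vmware.com/v1alpha1
metadata:
  name: gpu-vmclass
spec:
  hardware:
    cpus: 8
    memory: 32Gi
    devices:
      vgpuDevices:
        # Hypothetical NVIDIA vGPU profile; pick one offered by your host's GPU
        - profileName: grid_v100-4q
```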

See below a sample cluster specification that consumes such a class:

kind: TanzuKubernetesCluster
apiVersion: run.tanzu.vmware.com/v1alpha1
metadata:
  name: gpu-cluster
spec:
  topology:
    workers:
      count: 3
      class: gpu-vmclass
  distribution:
    version: v1.20.2

The same class can also be consumed by a standalone VM:

kind: VirtualMachine
apiVersion: vmoperator.vmware.com/v1alpha1
metadata:
  name: gpu-vm
  namespace: tkg-dev
spec:
  networkInterfaces:
  - networkName: "dev-network"
    networkType: vsphere-distributed
  className: gpu-vmclass
  imageName: ubuntu-custom-gpu
  storageClass: gpu-vm-policy

This blog post used to be part of my recent vSphere 7 Update 3 What’s New article, but was withdrawn at VMware’s request under an extended embargo until October 5, 2021.

VMware Bitfusion and Tanzu – Part 3: Utilize GPU from Kubernetes Pods and TKGS

This will be a multi-part post focused on the VMware Bitfusion product. I will give an introduction to the technology, how to set up a Bitfusion server and how to use its services from Kubernetes pods.

We saw in parts 1 and 2 what Bitfusion is and how to set up a Bitfusion Server cluster. The challenging part is to make this Bitfusion cluster usable from Kubernetes pods.

In order for containers to access Bitfusion GPU resources, a few general conditions must be met.

For this tutorial, I assume that a configured vSphere with Tanzu cluster is available, along with a namespace, a user, a storage class, and the Kubernetes CLI tools. The network can be based on either NSX-T or vSphere Distributed Switches together with a load balancer such as the AVI load balancer.

In the PoC described here, Tanzu on vSphere was used without NSX-T for simplicity. The AVI load balancer, now officially called NSX Advanced Load Balancer, handled load balancing.

We also need a Linux system with access to GitHub, or a mirror, to prepare the cluster.

The procedure in a nutshell:

  • Create the TKGS cluster
  • Obtain the Bitfusion bare-metal token and create a K8s secret
  • Clone the Git project and modify the makefile
  • Deploy the device plugin to the TKGS cluster
  • Deploy pods
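Once the device plugin is running, a pod can request Bitfusion GPU resources through an extended resource in its spec. A minimal sketch; the resource name `bitfusion.io/gpu` and the container image are assumptions for illustration, so check the device plugin's documentation for the exact resource key:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bitfusion-test
spec:
  containers:
  - name: tensorflow
    image: tensorflow/tensorflow:latest-gpu   # example image only
    command: ["python", "-c", "import tensorflow as tf; print(tf.__version__)"]
    resources:
      limits:
        # Assumed extended-resource name exposed by the Bitfusion device plugin
        bitfusion.io/gpu: 1
```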
Continue reading “VMware Bitfusion and Tanzu – Part 3: Utilize GPU from Kubernetes Pods and TKGS”

VMware Bitfusion and Tanzu – Part 2 : Bitfusion server setup

This will be a multi-part post focused on the VMware Bitfusion product. I will give an introduction to the technology, how to set up a Bitfusion server and how to use its services from Kubernetes pods.

Bitfusion Server setup preparation

A Bitfusion Server Cluster must meet the following requirements:

  • vSphere 7 or later
  • At least 10 Gbit LAN for the Bitfusion data traffic in smaller or PoC deployments. High bandwidth and low latency are essential; 40 Gbit or even 100 Gbit is recommended.
  • NVIDIA GPU with CUDA functionality and DirectPath I/O support, for example:
    • Tesla P40 (Pascal)
    • Tesla V100
    • T4
    • A100
  • At least 3 Bitfusion servers per cluster for high availability

This setup guide assumes that the graphics cards have been deployed to the ESXi 7+ servers and the hosts have joined a cluster in vCenter.

Continue reading “VMware Bitfusion and Tanzu – Part 2 : Bitfusion server setup”

VMware Bitfusion and Tanzu – Part 1: A primer to Bitfusion

This will be a multi-part post focused on the VMware Bitfusion product. I will give an introduction to the technology, how to set up a Bitfusion server and how to use its services from Kubernetes pods.

What is Bitfusion?

In August 2019, VMware acquired Bitfusion, a leader in GPU virtualization. Bitfusion provides a software platform that decouples specific physical resources from compute servers. It is not designed for graphics rendering, but rather for machine learning (ML) and artificial intelligence (AI). As of today, Bitfusion systems (client and server) run only on selected Linux platforms and support ML frameworks such as TensorFlow.

Why are GPUs so important for ML/AI applications?

Processors (Central Processing Units, CPUs) in current systems are optimized to process serial tasks in the shortest possible time and to switch quickly between tasks. GPUs (Graphics Processing Units), on the other hand, can process a large number of computing operations in parallel. The original intended application is in the name: the GPU was introduced to offload the CPU during graphics rendering by taking over all rendering and polygon calculations. In the mid-90s, some 3D games could still choose between CPU and GPU rendering. Even then, the difference was like night and day: the GPU could perform the necessary polygon calculations much faster and more smoothly.

A fine comparison of GPU and CPU architecture is described by Niels Hagoort in his blog post “Exploring the GPU Architecture”.

However, due to their architecture, GPUs are ideal not only for graphics applications, but for any application where a very large number of arithmetic operations must be executed in parallel. This includes blockchain, ML, AI, and any kind of data analysis (number crunching).
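As a rough analogy in plain Python (not GPU code), an elementwise operation is “embarrassingly parallel”: each element can be computed independently of the others, which is exactly the workload shape that GPUs accelerate. A minimal sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # Each element is computed independently of all others --
    # the hallmark of workloads that map well onto GPU cores.
    return x * x

def squares_serial(data):
    # One operation after another: the serial, CPU-style approach.
    return [square(x) for x in data]

def squares_parallel(data, workers=4):
    # The same work split across workers. Threads here only illustrate
    # the independent-work pattern; the real speedup comes from hardware
    # with thousands of cores, like a GPU.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(square, data))

data = list(range(1_000))
assert squares_serial(data) == squares_parallel(data)
print("serial and parallel results match")
```

Because no element depends on another, the work can be divided among any number of workers without coordination, and the result is identical to the serial computation.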

Continue reading “VMware Bitfusion and Tanzu – Part 1: A primer to Bitfusion”