You don’t need an enterprise cluster to get an impression of VMware Tanzu and Kubernetes. Thanks to the Tanzu Community Edition (TCE), anyone can now try it out for themselves – for free. The functionality is not limited in comparison to the commercial Tanzu editions. The only thing you don’t get with TCE is professional support from VMware; support is provided by the community via forums, Slack groups, or GitHub. This is perfectly sufficient for a PoC cluster or for CKA exam training.
Deployment is pretty fast and after a couple of minutes you will have a functional Tanzu cluster.
TCE can be deployed in two variants: either as a standalone cluster or as a managed cluster.
Standalone cluster

A fast and resource-efficient way of deployment without a management cluster – ideal for small tests and demos. The standalone cluster offers no lifecycle management, but it has a small footprint and can also be used in small environments.
Managed cluster

As with the commercial Tanzu editions, there is a management cluster and 1 to n workload clusters. This variant comes with lifecycle management and Cluster API, so declarative configuration files can be used to define your Kubernetes clusters: for example, the number of nodes in the management cluster, the number of worker nodes, the version of the Ubuntu image, or the Kubernetes version. Cluster API ensures compliance with the declaration; if a worker node fails, for example, it will be replaced automatically.
Because it runs multiple nodes, the managed cluster of course also requires considerably more resources.
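To illustrate the declarative workflow of a managed cluster, here is a minimal sketch using the tanzu CLI. The cluster name and file name are hypothetical, and exact flags may vary between TCE releases, so treat this as an outline rather than a definitive procedure:

```shell
# Create a management cluster interactively (opens a browser-based installer UI)
tanzu management-cluster create --ui

# Create a workload cluster from a declarative configuration file
# (my-workload-cluster.yaml is a placeholder for your own cluster config)
tanzu cluster create my-workload-cluster --file my-workload-cluster.yaml

# Change the declared worker count; Cluster API reconciles the actual
# node count to match the new declaration
tanzu cluster scale my-workload-cluster --worker-machine-count 3
```

The key point is the last command: you state the desired worker count, and Cluster API creates or removes nodes until the cluster matches it.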
TCE can be deployed either locally on a workstation using Docker, in your own lab/datacenter on vSphere, or in the cloud on Azure or AWS.
I have a licensed Tanzu with vSAN and NSX-T integration up and running in my lab, so TCE on vSphere would not really make sense here. Cloud resources on AWS or Azure are expensive. Therefore, I would like to describe the smallest and most economical possible deployment of a standalone cluster using Docker. To do so, I will use a VM on VMware Workstation. Alternatively, VMware Player or any other kind of hypervisor can be used.
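A rough sketch of what such a Docker-based deployment looks like on the command line. Note that the cluster name is hypothetical, and the command set changed between TCE releases (early releases used standalone-cluster, later ones unmanaged-cluster), so check the TCE documentation for your version:

```shell
# Install the TCE CLI via Homebrew (curl-based install scripts are also available)
brew install vmware-tanzu/tce/tanzu-community-edition

# Create a standalone cluster on the local Docker daemon
# (-i selects the infrastructure provider; "my-tce-cluster" is a placeholder name)
tanzu standalone-cluster create -i docker my-tce-cluster

# Verify the cluster is reachable
kubectl get nodes
```

Docker must be installed and running on the workstation VM before the cluster is created.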
VMware has launched a training portal that allows you to improve your skills and knowledge of Tanzu and Kubernetes. On the ModernAppsNinja portal there are many free training courses that will bring you closer to the ModernApps topic. You will find a variety of courses, labs, tutorials, learning materials, and hands-on exercises. For example, if you want to prepare for the Certified Kubernetes Administrator (CKA) exam or the VCP ModernApps certification, you can easily find the necessary resources and tools there. You will also find practical tutorials, such as how to use VS Code.
Anyone who has ever been involved in the design of IT concepts based on VMware products should be familiar with the VMware Validated Design Guide (VVD).
VMware Validated Design is a collection of data center design recommendations spanning compute, storage, networking, and management, which can be used as a reference guide for implementing a Software-Defined Data Center (SDDC). The VVD documentation consists of a series of documents that build on each other across all stages of the SDDC lifecycle. It can be used as an extension of the VMware Cloud Foundation (VCF) documentation; each version of the VVD Guide correlates with a particular VCF version.
VMware Validated Design has been discontinued after VMware Validated Design 6.2 and VMware Cloud Foundation 4.2. VMware Validated Solutions (VVS) is the successor to VVD.
VMware Validated Solutions
VMware Validated Solutions are validated technical implementations designed to assist in building a secure and stable infrastructure based on VCF. Each VVS includes a detailed design with design decisions, as well as implementation instructions. VMware Cloud Foundation SDDC Manager is required to implement VMware Validated Solutions.
Ultimately, this means that anyone interested in a VMware validated solution in the future needs to take a look at VCF.
Using GPUs in container workloads is a key requirement for developers who work with machine learning and artificial intelligence.
A VI admin can create a custom VM class that defines a vGPU specification. Developers can then use this class to assign GPU resources to their workloads. The VM class defines node placement and the vGPU profile.
This is not only available to GPU-enabled TKG clusters, but also to standalone VMs. The use of custom VM classes will simplify the consumption of GPU resources in ML/AI applications.
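From the developer's perspective, consuming such a class might look like the following sketch. This assumes vSphere with Tanzu and a GPU-enabled class named my-gpu-class published by the VI admin (both the class name and the availability of these resources in your namespace are assumptions):

```shell
# List the VM classes the VI admin has made available in the supervisor namespace
kubectl get virtualmachineclasses

# Inspect a specific class to see its CPU, memory, and vGPU specification
# ("my-gpu-class" is a hypothetical class name)
kubectl describe virtualmachineclass my-gpu-class
```

A TKG cluster manifest can then reference this class for its worker node pool, so every worker node is placed on a GPU host and receives the vGPU profile defined in the class.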