Veeam Backup v10 on vSAN 7.0

There have been many new releases in the first quarter of 2020: the long-anticipated Veeam Backup & Replication version 10, which we have been waiting for since 2017, and the latest generation of VMware vSphere. While I had the vSAN 7 beta running on my homelab cluster before GA, I've worked with Veeam Backup 10 only in customer projects. Unfortunately, there's no room to play with new features unless the customer requests them. One of the new features of Veeam v10 is the ability to use Linux proxies and repositories. With an XFS filesystem on the repository you can use the fast clone feature, which is similar to block cloning with ReFS on Windows.
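
As a small preview of the repository part, here is a minimal sketch of how such a volume could be prepared from Python. The device name and mount point are placeholders of my own choosing; the mkfs.xfs options enable CRC and reflink, which is what fast clone relies on.

```python
# Minimal sketch: prepare an XFS volume with reflink enabled so Veeam can use
# fast clone on a Linux repository. Device name and mount point are examples,
# adjust them to your environment. Requires root privileges.
import subprocess

DEVICE = "/dev/sdb1"          # example backup volume
MOUNT_POINT = "/mnt/backups"  # example repository path

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# XFS with CRC and reflink enabled (4 KiB block size) is what fast clone needs
run(["mkfs.xfs", "-b", "size=4096", "-m", "reflink=1,crc=1", DEVICE])
run(["mkdir", "-p", MOUNT_POINT])
run(["mount", DEVICE, MOUNT_POINT])

# verify that reflink support is active on the mounted filesystem
run(["xfs_info", MOUNT_POINT])
```

In practice you would of course run these commands directly in a shell on the repository server; the point is simply which filesystem options the repository volume needs.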

In this tutorial I will show how to:

  • Deploy and size the Veeam server
  • Configure the base integration of vCenter
  • Build, configure and deploy a Linux proxy and integrate it into the backup infrastructure
  • Build, configure and deploy a Linux XFS repository

Using Veeam Backup on a vSAN cluster comes with special design requirements. There is no Direct SAN backup on VMware vSAN because there is neither a SAN, nor a fabric, nor HBAs. Only two backup methods are available: Network mode (NBD) and Virtual Appliance mode (hot-add). The latter is recommended for vSAN, but you should deploy one proxy per host to avoid unnecessary traffic on the vSAN interfaces. Hot-add also utilizes the Veeam Advanced Data Fetcher (ADF).
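
To get a feel for how many hot-add proxies that means, here is a small pyVmomi sketch (my own helper, not part of Veeam's tooling) that lists the ESXi hosts of a cluster. The vCenter address, credentials and cluster name are placeholders.

```python
# Minimal sketch: list the ESXi hosts of the vSAN cluster so you know how many
# hot-add proxies to plan for (one per host). Assumes pyVmomi is installed;
# hostname, credentials and cluster name are examples only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only, use proper certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        if cluster.name != "vSAN-Cluster":   # example cluster name
            continue
        print(f"Cluster {cluster.name}: plan one hot-add proxy per host")
        for host in cluster.host:
            print(f"  - {host.name}")
    view.Destroy()
finally:
    Disconnect(si)
```

With the host list in hand you can decide where to place the Linux proxies, one per host.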

Talking about licenses: having a Linux proxy on each host will reduce the cost of Windows licensing. One more reason to play around with this new feature. A Veeam license is required too, but as a vExpert I can get an NFR (not for resale) license which is valid for one year. Just one of the advantages of being a vExpert. 🙂

Let the games begin. We'll need a Veeam server that holds the job database and the main application. The proxy and repository roles will run on separate (Linux) servers.

Continue reading “Veeam Backup v10 on vSAN 7.0”

vSAN Health – vSAN Disk Balance

If you have joined the VMware Customer Experience Improvement Program (CEIP), you can use Skyline Health in your cluster. In older versions of vSphere/vSAN this feature used to be called vSphere Health and vSAN Health respectively. Both have been renamed to Skyline Health. You can access Skyline Health in the vSphere Client by navigating to Monitor > vSAN > Skyline Health.

Today I saw a warning after powering up my homelab.

Drilling into the details showed that one of the four hosts had issued a warning: “Proactive rebalance is needed”.

Usually a vSAN cluster distributes load among its capacity disks automatically. For some reason that wasn’t the case in my homelab. But there’s help: you can click on “Configure Automatic Rebalance” directly from Skyline Health (see picture below).

You’ll be redirected to the vSAN cluster configuration. As you can see in the screenshot below, my cluster wasn’t configured for automatic rebalance.

Just move the slider and vSAN will automatically start to balance the disks. A couple of minutes later the warning switched to green. Depending on the cluster load and how imbalanced the capacity disks are, this process might take a while.
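
For the curious, the check essentially looks at the spread between the fullest and the emptiest capacity disk. The snippet below is a simplified illustration with made-up utilization values and an assumed 30% warning threshold, not the actual health check logic.

```python
# Simplified illustration of what the disk balance check looks at: the spread
# between the fullest and the emptiest capacity disk. The utilization figures
# and the 30% threshold are assumptions for illustration only.
capacity_disk_usage = {          # percent used per capacity disk (made-up values)
    "capacity-disk-1": 78,
    "capacity-disk-2": 42,
    "capacity-disk-3": 45,
    "capacity-disk-4": 47,
}

variance = max(capacity_disk_usage.values()) - min(capacity_disk_usage.values())
print(f"Utilization variance between disks: {variance} %")

if variance > 30:   # assumed warning threshold
    print("Skyline Health would flag this: proactive rebalance is needed")
else:
    print("Disks are considered balanced")
```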

Links

VMware KB 2149809 – vSAN proactive rebalance

Why does a vSAN cluster need slack space?

I get a lot of questions during trainings and vSAN design projects. People ask me why there is a requirement for 30% slack space in a vSAN cluster. At first glance it looks like a waste of (expensive) resources, and especially with all-flash clusters it is a significant cost factor. This slack space is often mistaken for a growth reserve, but that’s wrong. It is by no means a reserve for future growth. On the contrary, it is short-term allocation space, needed by the vSAN cluster for rearrangements during storage policy changes.
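
A short worked example shows why. Assume a 100 GB VMDK is protected with RAID-1 (FTT=1) and you change its storage policy to RAID-5 (FTT=1). vSAN builds the new components first and removes the old replicas only after the resync has finished, so for a short time both layouts consume capacity. The numbers below are rounded and purely illustrative:

```python
# Rough illustration of why slack space is needed during a policy change.
# A 100 GB VMDK is moved from RAID-1 (FTT=1, 2x footprint) to RAID-5
# (FTT=1, roughly 1.33x footprint). Numbers are rounded, for illustration only.
vmdk_gb = 100

raid1_footprint = vmdk_gb * 2.0    # two full replicas
raid5_footprint = vmdk_gb * 1.33   # 3+1 erasure coding

# vSAN creates the new components before it deletes the old ones,
# so both layouts exist on disk at the same time for a short period.
peak_usage = raid1_footprint + raid5_footprint

print(f"Steady state before the change: {raid1_footprint:.0f} GB")
print(f"Steady state after the change:  {raid5_footprint:.0f} GB")
print(f"Temporary peak during resync:   {peak_usage:.0f} GB")
```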

Continue reading “Why does a vSAN cluster need slack space?”

vSAN Homelab Cluster Ep.2

If you’ve missed it, read part 1 – Planning phase.

Unboxing

Last Tuesday a delivery notification made me happy: the hardware shipment was on its way. Now it was time to get the cabling ready. I can’t stand Gordian knots of power cords and patch cables; I like to keep them properly tied together with Velcro tape. To keep things simple, I started with a non-redundant approach for vSAN traffic and LAN. That still meant eight patch cables to be labeled and bundled, plus four cables for the IPMI interfaces. I found out later that the IPMI interface falls back to the LAN interface if not connected. That’s nice: it saves me four cables and switch ports.

Host Hardware

All four hosts came fully assembled and had passed a burn-in test. The servers are compact, roughly the size of a small pizza box: 25.5 cm wide, 22.5 cm deep and 4.5 cm high. But before I press power-on, I need to have a look under the hood. 🙂

Let’s start with the rear side. As you can see in the picture, there are plenty of interfaces for such a small system. In the lower left corner there’s the 12 V connector, which can be fastened with a screw cap. Then there are two USB 3.0 connectors and the IPMI interface above them. The IPMI comes with console and video redirection (HTML5 or Java); no extra license is needed.

Then there are four 1 Gbit (i350) ports and four 10 Gbit (X722) ports, two of which are SFP+. In the lower right there’s a VGA interface. Thanks to console redirection it isn’t strictly necessary, but it’s good to have one in an emergency.

Continue reading “vSAN Homelab Cluster Ep.2”