Veeam Backup support for Server 2008 will end with the next major release

The next major release of Veeam Backup & Replication will drop support for several Windows versions. This was announced by Anton Gostev in his weekly forum digest on February 25, 2019.

Veeam Backup & Replication components can no longer be installed on Windows Server 2008 SP2, Windows 8.0, or Windows 10 1507/1511. Windows Server 2008 R2 SP1, Windows 8.1, and Windows 10 (1607 or later) will continue to be supported, as will Microsoft Windows 7 SP1.

Server 2003 and XP guest OS affected

Application-aware processing and guest file system indexing will no longer support Windows Server 2003 and Windows XP virtual machines. However, crash-consistent backup of such VMs will of course still be supported, since the backup engine generally does not care what is inside the images it backs up (or whether there is any OS at all).

Curtains for vSphere 5.0 and 5.1

VMware vSphere 5.0 and 5.1 will no longer be supported, while vSphere 5.5 remains supported. Notably, the new VeeamCDP functionality will require vSphere 6.5 or later due to its platform dependencies.

VeeamCDP only for vSphere 6.5 and later

The long-announced and repeatedly postponed VeeamCDP feature will require vSphere 6.5 or later.

ESXi host restore with obstacles

Unable to re-join EVC cluster after restore of ESXi system

Changing the boot media of ESXi hosts has (unfortunately) become a routine job, because many flash media have a limited lifespan. To be fair, I need to point out that many customers use (cheap and dirty) USB flash sticks as boot media. But what is good in a homelab turns out to be a bad idea in enterprise environments.

The usual procedure for media replacement is fairly simple:

  • export the host configuration (a scripted sketch of this step follows the list)
  • evacuate and shut down the host
  • prepare a fresh boot medium with an installation ISO at the same or a lower patch level than the old installation
  • boot the freshly installed host
  • apply an (intermediate) IP address if no DHCP is available
  • restore the host configuration
  • re-connect the host to the cluster
  • apply patches if necessary
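
The export and restore steps lend themselves to scripting. Below is a minimal pyVmomi sketch of the configuration export, assuming a vCenter at vcenter.example.com and a host named esx01.example.com (both placeholders, not from this article):

```python
# Minimal sketch: export an ESXi host configuration via pyVmomi before
# swapping the boot medium. Hostnames and credentials are placeholders.
import ssl
import requests
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="...", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esx01.example.com")

    # Ask the host to bundle its current configuration. The call returns
    # a download URL for the .tgz state bundle; the API uses '*' as a
    # placeholder for the host name.
    url = host.configManager.firmwareSystem.BackupFirmwareConfiguration()
    url = url.replace("*", host.name)
    with open("esx01-configBundle.tgz", "wb") as f:
        f.write(requests.get(url, verify=False).content)

    # The restore counterpart is RestoreFirmwareConfiguration(force=...),
    # which requires the host to be in maintenance mode.
finally:
    Disconnect(si)
```

PowerCLI offers the same pair of operations through Get-VMHostFirmware -BackupConfiguration and Set-VMHostFirmware -Restore.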

So far so good. But last week I had a nasty experience with a recovered ESXi host. Continue reading “ESXi host restore with obstacles”

Automatic Segmentation of VDI Endpoints

Automatic VLAN assignment and use of DHCP relays

Software-defined datacenters (SDDC) enable us to keep many components within the hypervisor's software layer. But sooner or later we need to leave that layer in order to reach the user. Usually thin clients or zero clients are used as VDI endpoints. These hardware boxes are connected to the LAN and need an IP address.

I will demonstrate how to separate endpoints into subnet segments and VLANs while still assigning IP addresses from a centralized DHCP server.
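
The mechanism that makes a centralized DHCP server work across VLAN boundaries is the DHCP relay: it forwards the client's broadcast DISCOVER to the server as unicast and stamps the giaddr field with its own interface address, which the server uses to pick the matching scope. The scapy sketch below (all addresses hypothetical) illustrates just that transformation; it builds the packets without sending them.

```python
# Sketch of what a DHCP relay does with a client's DISCOVER, built with
# scapy. All addresses are hypothetical; packets are built, not sent.
from scapy.all import BOOTP, DHCP, IP, UDP

relay_ip = "10.20.30.1"   # relay interface inside the endpoint VLAN
server_ip = "10.0.0.10"   # centralized DHCP server

# Broadcast DISCOVER as it arrives from a thin client in the VLAN.
discover = (IP(src="0.0.0.0", dst="255.255.255.255") /
            UDP(sport=68, dport=67) /
            BOOTP(chaddr=b"\x00\x11\x22\x33\x44\x55") /
            DHCP(options=[("message-type", "discover"), "end"]))

# The relay unicasts the request to the central server and stamps
# giaddr with its own interface address; the server uses giaddr to
# select the matching scope (here, the 10.20.30.0/24 segment).
relayed = discover.copy()
relayed[IP].src = relay_ip
relayed[IP].dst = server_ip
relayed[UDP].sport = 67
relayed[BOOTP].giaddr = relay_ip
```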

Continue reading “Automatic Segmentation of VDI Endpoints”

Troubleshoot vmnic malfunction

Malfunction is worse than failure

Redundancy is key in virtual environments. If one component fails, another will jump in and take over. But what happens if a component does not fail outright, yet no longer works properly? Such a fault is much harder to detect than a clean failure.

I recently got a call from a friend who had suddenly lost all file shares on his (virtual) file server. I opened a connection to a service machine and started troubleshooting. These were the first diagnostic results:

  • File server did not respond to ping.
  • Ping to gateway was successful.
  • Name resolution against virtual DC was successful.
  • A browser session to vCenter failed and vCenter did not respond to ping.

It is a small two-node cluster running vSphere 6.5 U2. Maybe one ESXi host had failed? But then HA should have restarted all affected VMs, and that was not the case. I pinged both hosts and got instant replies. No, it did not look like a host crash.
Next I opened the host client to have a look at the VMs. All VMs were running.
I opened a console session to the file server and could not log in with domain credentials, but a local account worked. From the inside, the file server looked healthy.
Now it became obvious that there was a networking problem. But all vmnics were active and their link status was “up”. The virtual standard switch holding the VM-Network portgroup had three redundant uplinks, all with status “up”. So where was the problem?
I found another VM on the same host as vCenter and the file server that responded to ping and had internet connectivity.
I opened an RDP session to it, and from there I was able to ping every VM on the same host; even vCenter could be reached by browser. Now the picture became clearer: one of the uplinks must have a problem, even though it had not failed. But which one? Continue reading “Troubleshoot vmnic malfunction”
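
One way to narrow down a silently failing uplink without touching the cabling is to bisect: force the vSwitch onto a single active uplink at a time and repeat the ping tests. The following pyVmomi sketch outlines the idea; the host, credentials, and vSwitch name are placeholder assumptions, not details from the article.

```python
# Sketch: bisect a silently failing uplink by forcing the vSwitch onto
# one active NIC at a time. Host, credentials and vSwitch name are
# placeholder assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="esx01.example.com", user="root",
                  pwd="...", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    net_sys = host.configManager.networkSystem

    vswitch = next(s for s in net_sys.networkInfo.vswitch
                   if s.name == "vSwitch0")
    spec = vswitch.spec
    all_active = list(spec.policy.nicTeaming.nicOrder.activeNic)

    for nic in all_active:
        # Temporarily make this the only active uplink, then retest.
        spec.policy.nicTeaming.nicOrder.activeNic = [nic]
        net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)
        input(f"vSwitch0 now uses only {nic}; run ping tests, then press Enter")

    # Restore the original teaming order.
    spec.policy.nicTeaming.nicOrder.activeNic = all_active
    net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)
finally:
    Disconnect(si)
```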