Increase root partition on VCSA

First aid if VCSA root partition turns out to be too small

In recent times I have frequently seen vCenter Server Appliances (VCSA) whose root partitions ran out of free space. As a result, services are unable to start after a reboot. There are some tricks to free up space on the root partition, but in the long run you should increase the partition size.

Sounds simple – but it’s quite tricky and a bit dangerous. Don’t try this at home! 😉
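Before going down that road it helps to see what is actually consuming the root filesystem. A minimal sketch using standard Linux tools in the VCSA shell; which files may safely be deleted depends on your VCSA version and is a topic of its own:

# Show free space on the root partition
df -h /

# List the largest directories on / without crossing into other mount points
du -xh / 2>/dev/null | sort -h | tail -20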

Continue reading “Increase root partition on VCSA”

Update vSphere H5 Fling

New Version 3.36.0

Since vSphere 6.5 the HTML5 client (vSphere Client) has been an integral part of the environment and is getting updated with more and more features with every release of vCenter. It is meant to replace the infamous Flash client (Web Client) sooner or later.

In releases below 6.5 there’s no HTML5 client included, but it is possible to get the same functionality with a Fling.

Continue reading “Update vSphere H5 Fling”

Veeam Default Repository

System choked by data – Why you should remove the default repository after installation

A typical Veeam Backup & Replication installation consists of several sub-components: there is the backup server with the database, and there are backup proxies, mount servers, gateway servers and backup repositories. Repositories are datastores which hold your backup data. Right after initial setup the installer will create a repository on your system partition, which is the default repository. Normally your system partition isn’t very big, maybe 100 GB or less, so one of the first tasks after installation is to define a new backup repository with terabytes of free space. Sometimes you might forget about the default repository, which still points at your system partition. Under certain conditions this can turn into a time bomb, as I witnessed in the wild recently. Continue reading “Veeam Default Repository”

Microsoft Patch KB4088875/KB4088878 has issues with VMXNET3 adapter

March rollup disconnects Windows Server 2008R2 VMs

Microsoft’s monthly March 2018 rollup KB4088875 and the related patch KB4088878 seem to have issues with Windows Server 2008 R2 VMs and the VMXNET3 adapter. Applying the patch disconnects Windows Server 2008 R2 VMs from the network.

Sound familiar?

Yes, indeed! There used to be an old problem with Server 2008 R2 VMs that had a VMXNET3 NIC. After restoring these VMs from backup, they lost their static IP and switched to DHCP instead. The problem is known to VMware and there’s a corresponding KB article, KB1020078.

Microsoft released a hotfix 433809 (KB2550978) to prevent the issue, but you had to switch to DHCP first before installing the hotfix and then revert to your static IP. I published the procedure back in 2013. That blog post is in German, but the procedure is simple (a command-line sketch follows the list):

  • change NIC from static IP to DHCP
  • apply hotfix
  • reboot
  • revert to static IP
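For reference, those steps can also be scripted from an elevated command prompt inside the guest. This is only a sketch: adapter name, IP settings and hotfix file name are placeholders you have to adapt to your environment.

rem 1. Switch the NIC from its static IP to DHCP (adapter name is an example)
netsh interface ipv4 set address name="Local Area Connection" source=dhcp
netsh interface ipv4 set dnsservers name="Local Area Connection" source=dhcp

rem 2. Apply the hotfix (file name is a placeholder) and reboot
wusa.exe Windows6.1-KB2550978-x64.msu /quiet /norestart
shutdown /r /t 0

rem 3. After the reboot, revert to the original static configuration
netsh interface ipv4 set address name="Local Area Connection" static 192.168.1.10 255.255.255.0 192.168.1.1
netsh interface ipv4 set dnsservers name="Local Area Connection" static 192.168.1.2 primary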

It seems that Microsoft has now included that old hotfix in a monthly rollup. At least the similarity is striking.

Fun fact

Those who had already applied the hotfix to their 2008 R2 VMs in the past seem to be immune to the problem.


VMware vExpert award 2018

I was pleased to find an email in my inbox telling me that I’m part of the 2018 vExpert program.

Another year vExpert

For me it’s an honor and an obligation. I will continue to share my knowledge with the community, and – time permitting – extend that effort.

VMware vExpert Program

The annual VMware vExpert award is given to individuals who have significantly contributed to the community of VMware users over the past year. The title is awarded to individuals (not employers) for their commitment to sharing their knowledge and passion for VMware technology above and beyond their job requirements.

The VMware vExpert program is VMware’s global evangelism and advocacy program. The program is designed to put VMware’s marketing resources towards your advocacy efforts. Promotion of your articles, exposure at our global events, co-op advertising, traffic analysis, and early access to beta programs and VMware’s roadmap. The awards are for individuals, not companies, and last for one year. Employees of both customers and partners can receive the awards. In the application, we consider various community activities from the previous year as well as the current year’s (only for 2nd half applications) activities in determining who gets awards. We look to see that not only were you active but are still active in the path you chose to apply for.

Links

vExpert tweets on Twitter: #vExpert

VMTN Blog – vExpert 2018 Award Announcement

VMware – vExpert Directory

New Datacore Witness against Split-Brain scenario

DataCore SANsymphony offers software-defined storage with a transparent mirror in active/active mode.

Recently released version 10 PSP7 now supports a witness to avoid split-brain scenarios.

The Problem

In cases where both DataCore hosts (DC1, DC2) lose their mirror (MIR) paths and the LAN connection between them, a split-brain scenario occurs.

Both hosts remain functional and have a fully intact set of data on their storage. Both hosts can handle I/O from initiators in their (split) region. Both datastores receive writes that cannot be mirrored to the opposite site, and those changes cannot be synced once the mirror comes back up. Continue reading “New Datacore Witness against Split-Brain scenario”

Veeam ReFS Repository on iSCSI Targets

Troubleshooting Repository Deadlocks

With the integration of the Resilient File System (ReFS) into Veeam Availability Suite 9.5, a whole bunch of new features became available. One of the biggest advantages is ‘Fast Cloning Technology’, which enables synthetic full backups by merely creating pointers to already existing data blocks on the repository.

In a small-scale environment I had a hardware repository server (Windows Server 2016) with an iSCSI volume (ReFS, 64K) as the primary backup target. This constellation worked like a Swiss watch: daily backups ran for months without any trouble, and fast cloning technology enabled weekly synthetic full backups with minimal consumption of extra space.
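Just for completeness: such a volume can be formatted with ReFS and 64K clusters from an elevated command prompt on the repository server. Drive letter and label in this sketch are examples; the cluster size matches the repositories described here.

rem Format the iSCSI volume with ReFS and a 64K allocation unit size
format R: /FS:ReFS /A:64K /Q /V:VeeamRepo01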

Recently I added another iSCSI volume (ReFS, 64K) to be used as a repository for backup copies. That’s when the fun began… Continue reading “Veeam ReFS Repository on iSCSI Targets”

Change Brocade FOS default passwords

Brocade FC switches come with four default user accounts: admin, root, user and factory.

When connecting via SSH as user ‘root’ with the default password ‘fibranne’, you will be prompted to change the passwords for the accounts root, user and factory.


This happens at every login as long as the default passwords have not been changed. The process can be started by pressing <ENTER>. If it is skipped by pressing Ctrl-C, you will be prompted again at the next login of user ‘root’.
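If you only want to change the password of a single account later on, the passwd command does the job, provided you are logged in with an account of sufficient privilege (e.g. root):

passwd user

This will prompt for a new password for the account ‘user’; the same works for the other default accounts.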

Show users

userconfig --show -a

This command will show a list of all local users and their settings.


vMotion fails at 21% with error 195887371

How to troubleshoot vMotion issues

Troubleshooting vMotion problems is in most cases a matter of network configuration. In this case I will demonstrate how to trace down the problem and how to find possible culprits.

What’s the problem?

Initiating a host vMotion between esx1 and esx2 passes all pre-checks, but then fails at 21% progress.

Migrate virtual machine:Failed waiting for data. Error 195887371. The ESX hosts failed to connect over the VMotion network.

See the error stack for details on the cause of this problem.
Time: 07.01.2018 19:08:08
Target: WSUS
vCenter Server: vc
Error Stack
Migration [167797862:1515348488969364] failed to connect to remote host <192.168.45.246> from host <10.0.100.102>: Timeout.
vMotion migration [167797862:1515348488969364] vMotion migration [167797862:1515348488969364] stream thread failed to connect to the remote host <192.168.45.246>: The ESX hosts failed to connect over the VMotion network
The vMotion migrations failed because the ESX hosts were not able to connect over the vMotion network. Check the vMotion network settings and physical network configuration. 
Migration [167797862:1515348488969364] failed to connect to remote host <10.0.100.102> from host <192.168.45.246>: Timeout.
vMotion migration [167797862:1515348488969364] failed to create a connection with remote host <10.0.100.102>: The ESX hosts failed to connect over the VMotion network
Failed waiting for data. Error 195887371. The ESX hosts failed to connect over the VMotion network.
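The error stack already gives a hint where to look: one host expects its peer at a vMotion address in the 192.168.45.x range, while the other one answers from 10.0.100.x. A few first checks from the ESXi shell could look like this, where vmk1 is only an assumption for the vMotion-enabled vmkernel port:

# Show all vmkernel interfaces and their IPv4 configuration
esxcli network ip interface ipv4 get

# Verify which vmkernel port is tagged for vMotion
esxcli network ip interface tag get -i vmk1

# Test connectivity over that interface (add ++netstack=vmotion if a dedicated vMotion TCP/IP stack is used)
vmkping -I vmk1 192.168.45.246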

Continue reading “vMotion fails at 21% with error 195887371”