VMware is sunsetting its native NSX load balancers. Customers should migrate to the currently supported NSX Advanced Load Balancer (Avi), which simplifies operations today while preparing you for multi-cloud and container strategies tomorrow. Avi works across all environments beyond the NSX framework, expanding use cases to public cloud, containers, and application security while adding capabilities for GSLB, WAF, and analytics. A migration tool will be available to make moving your existing configuration to the current technology easy and painless.
I already discussed the initial version of this plugin in https://www.elasticsky.de/en/2020/06/veeam-storage-plugin-for-datacore-deepdive/.
The cosmetic “1970” bug mentioned in the blog post above has already been fixed in an interim release. With v1.2.0 we now get full CDP (Continuous Data Protection) support. CDP in this context is not related to Veeam’s feature of the same name; DataCore has offered a feature under this acronym for at least ten years.
I also explained a workaround to leverage CDP rollback points with the old version of the plugin. This workaround is no longer needed, as the plugin now detects CDP rollback points just like it detects snapshots on your SANsymphony volumes!
The initial installation of the plugin is pretty straightforward and has also been covered already. To update your installation, the new version of the plugin can be installed on top of the old one. Just disable all jobs beforehand and wait for VBR to become idle. The installer will replace the plugin files within the path:
C:\Program Files\Veeam\Backup and Replication\Plugins\Storage\DataCore Software Corporation
Once installed and configured, VBR will immediately detect all CDP rollback points you create from the DataCore console and will let you perform all recoveries just as with common snapshots. The difference is that you do not need any snapshot schedules. Just enable CDP for your volumes and, only when necessary, create a rollback point at the exact moment in time you need. That could be, for example, just a few seconds BEFORE the ransomware started to encrypt your file server. This lowers your RPO for all VMs to a few seconds.
In contrast to snapshots, you are currently not able to generate rollback points from within your Veeam console. You have to switch to DataCore’s console, because some extra decisions have to be made when generating a rollback point:
- The exact point in time to roll back to
- The type of the rollback point: either “Expire Rollback” or “Persistent Rollback”
The amount of time you can rewind depends on your DataCore license on the one hand and on the size of the history buffer you reserved on the other. I would strive for at least 8 h here, to allow rolling back a regular working day, but more is even better, of course. For a 24 h buffer you would have to reserve at least your daily change rate as the history buffer, so have some extra disk space ready.
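The sizing rule above can be sketched as simple arithmetic. This is my own back-of-the-envelope estimate, not an official DataCore formula: the buffer must hold all writes made during the rewind window, and the `headroom` factor for write bursts is my assumption.

```python
def history_buffer_gb(daily_change_rate_gb: float, buffer_hours: float,
                      headroom: float = 1.25) -> float:
    """Estimate the history buffer (GB) to reserve for a given rewind window.

    Assumes writes are spread evenly over the day; 'headroom' is an
    arbitrary safety factor for bursts, not a vendor recommendation.
    """
    return daily_change_rate_gb * (buffer_hours / 24) * headroom

# Example: 500 GB daily change rate, 24 h rewind window
print(history_buffer_gb(500, 24))   # 625.0 GB (daily change rate plus headroom)

# Same change rate, 8 h window (one working day), no headroom
print(history_buffer_gb(500, 8, headroom=1.0))
```

Real write activity is rarely evenly distributed, so treat the result as a lower bound and monitor actual buffer consumption after enabling CDP.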
An “Expire Rollback” will automatically be disposed of once its point in time moves out of the history buffer. This can of course be dangerous in a recovery scenario, as you would all of a sudden lose the valuable restore point, maybe right in the middle of a recovery. This is why, in the default settings, only a “Persistent Rollback” will be detected by Veeam. This can be changed, of course; read about the details in this whitepaper.
I would nevertheless recommend sticking with detecting only “Persistent Rollbacks”. These rollbacks should preferably be used with mirrored volumes only, because there a rollback remains secured even once it reaches the end of the history buffer: the productive volume on the side holding the history buffer will be disconnected. With a mirrored volume this results in the volume running from only one side of your cluster, but your VMs will remain available and so will the rollback point.
One should plan for CDP accordingly. Have an independent disk pool for your history buffer to minimize performance penalties; this buffer should offer the same performance as the productive pool. I would recommend 32 MB as the SAU (Storage Allocation Unit) size for the buffer pool. For the productive pool I usually stick to 128 MB, though 1024 MB is the default now, as the smaller SAU size enhances granularity for AST.
This year I applied for the VMware vExpert Pro program for the first time and was delighted to receive the news on Monday that I had been accepted.
What is vExpert Pro?
The idea behind the launch of the vExpert Pro program is to create a worldwide network of vExperts who are willing to find, support, and mentor new vExperts in their local communities.
VMware launched the program in 2018 and describes vExpert Pro as follows:
A vExpert Pro is a current vExpert who excels in their local region, adding value to the program and giving back to the community. This person has a strong relationship with the local IT community in general, and works as an advocate for the vExpert program, recruiting, mentoring and training people.
What does vExpert Pro mean for me?
I see it as an honor and recognition for the work I have been doing in and for the community over the last several years.
There is a large number of unknown experts around the world with a high level of knowledge and a willingness to share this expertise with others. They often lack just a little push to apply for the vExpert program. Many don’t consider themselves good enough or worthy of becoming part of the vExpert program. This is where the vExpert Pro will come into play. It is their mission as mentors to assist new experts in finding their way into the community.
I’ve been actively blogging since 2010, and for a long time I too considered my own content to be insignificant or not good enough. So it wasn’t until 2017 that I applied to become a vExpert for the first time. Back then, I would have appreciated a mentor like a vExpert Pro; that would certainly have helped me join the vExpert program with more confidence, and much sooner. I consider this to be my primary mission as a vExpert Pro.
I have been actively mentoring in the VMUG Mentorship Program for some time now and have been coaching two candidates (mentees) from Indonesia and Poland. Here the focus is on personal development, training, and improvement of communication skills such as public speaking. The vExpert Pro role is the logical next step in this activity: I would like to guide talented people in my region on the path to vExpert and support them in every way possible.
Get in touch
Have you ever thought about joining the vExpert program? Did you abandon the idea because you lacked the courage or motivation? Then don’t hesitate to get in touch with me.
Lab environments are a great thing. We can test new products on a small scale platform and demonstrate them as a proof of concept (PoC).
Like many of my fellow bloggers I write down my lab experience in little blog posts that I share with the community. I regularly read blogs and tutorials to keep myself informed about new products and techniques. There is hardly a topic in the field of virtualization that someone hasn’t written something about at some point. This is invaluable, as it gives you a quick introduction to what is usually a complex subject.
When reading my (and other) blog posts, you may get the impression that the described setup procedure follows the simple skip-skip-finish principle. In other words, accept the default values, click three times, and the installation is complete. This might be true in the lab, but a real-life deployment is miles away from a lab setup.
In the lab many things are simplified to the max according to the KISS principle (keep it simple, stupid). Some of the methods used are not necessarily in compliance with the manufacturer’s recommendations, or are outright forbidden in production environments.
This means: having read a tutorial by my favorite blogger [insert name here] does not enable me to transfer what I have learned 1:1 to a real project.
I have had several discussions about this in preliminary project meetings. People have asked why the planning phase takes so much time. They said that (they thought) the product was totally easy to install, as you can read on [insert name here]’s blog.
As a blogger and lab user, I know how to read these posts: they are meant as a quick introduction and an easily understandable overview of a new technology. This has very little to do with real-world deployments. In this post, I would like to illustrate this with a few examples.