Most people know what I’m talking about when I say “virtual test machines”, but I’m sure some of you do not. Or aren’t 100 percent sure anyway. In any case, it is simply the practice of setting up virtual machines (aka “guest” computers) to support testing of various things. In my particular case, it involves software packaging and deployment testing, but I’ve used this for modeling Active Directory migrations, SMS and ConfigMgr deployments, Group Policy modeling and management, scripting, and just plain old diabolical fun.
Most anyone working in IT who uses VMware, Virtual PC, VirtualBox, or whatever, is well aware of the good things it provides. It makes life easier. But there are always caveats. Nothing in IT is a panacea of pure goodness. There is always something to be careful to avoid or prepare for. Virtualization is no different.
I will say that there are more good things (benefits) than bad things (drawbacks, disadvantages), so in the end: proceed. All I’m doing is shining a flashlight on a few creaky boards you may step on eventually, so keep your eyes open.
- Protected isolation
- Faster provisioning
- Faster roll-back via Snapshots
- Multiple test platforms via Cloning
- Less hardware, reduced footprint/power/heat
Most of these are self-explanatory. Virtualized computers provide a protected bubble in which to do things that would otherwise cause changes that are difficult (or impossible) to recover from. A mistake with a Group Policy in a production environment can be disastrous and ugly. The same mistake in a virtual environment is much more easily undone. The usual advantages cited for data center virtualization apply equally to desktop virtualization.
I will say that I give a slight tip of the hat to VMware Workstation over Virtual PC for “snapshots”. I don’t recall VirtualBox providing this feature either, but it’s a HUGE benefit to virtual testing. Not just for being able to “roll back” (aka “revert”), but also for easily building state trees, which let you fork one virtual machine into many in short order, each carrying a unique delta over the previous state. For example: a baseline machine, then snapshots for adding various applications, service packs, and combinations of them, giving you six or seven machine states you can jump to at any time.
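To make the state-tree idea concrete, here’s a toy sketch in Python. This is just an illustration of the concept, not any vendor’s snapshot API; all names (the machines and snapshot labels) are made up.

```python
# Toy model of a snapshot state tree: each node is a saved machine
# state holding only the delta over its parent.

class Snapshot:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def lineage(self):
        """Return the chain of states from the baseline down to this one."""
        node, chain = self, []
        while node:
            chain.append(node.name)
            node = node.parent
        return list(reversed(chain))

# One baseline forked into several test states (labels are illustrative).
baseline = Snapshot("Win7-SP1-baseline")
office = Snapshot("plus-Office-2010", baseline)
office_vpn = Snapshot("plus-VPN-client", office)
dotnet = Snapshot("plus-.NET-4", baseline)

print(office_vpn.lineage())
# ['Win7-SP1-baseline', 'plus-Office-2010', 'plus-VPN-client']
```

Jumping to any node restores the baseline plus every delta along its lineage, which is why one guest can stand in for half a dozen distinct test machines.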
But as I said: there are caveats.
- Domain synchronization
- Tools / Additions updates
- Hardware Interfaces
- Isolate “test” from “production” for availability/management
Let me explain these a bit more:
Domain synchronization comes into play if you join your virtual machines to a functional Active Directory domain. Each computer in a domain environment synchronizes its own machine account password with AD on a regular schedule. If you roll a virtual machine back to a state that precedes the most recent synchronization, guess what happens? It’s like trying to log on with your older password. BZZZT! Fail! The computer will display messages at logon stating that the domain controller could not be found, and so on. Simply rejoin the virtual machine to the domain to reset it, then re-snapshot it and delete the older snapshot (so you don’t do it again). You will typically need to do this about once per month.
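The failure mode above can be sketched in a few lines. This is a toy simulation of the shared-secret rotation, not real AD code; the password values and variable names are invented for illustration.

```python
# Toy illustration of why reverting a snapshot breaks the machine's
# trust with the domain: the guest and AD rotate a shared machine
# password, and the revert restores a stale copy on the guest side.

machine_password = "pw-v1"      # secret stored inside the guest
ad_stored_password = "pw-v1"    # matching secret stored in AD

snapshot = machine_password     # the snapshot captures the guest state

# Scheduled rotation (roughly every 30 days): both sides move to a new secret.
machine_password = ad_stored_password = "pw-v2"

# Revert the guest to the old snapshot -- AD keeps the newer password.
machine_password = snapshot

assert machine_password != ad_stored_password  # secure channel broken; logon fails
```

Rejoining the domain simply makes both sides agree on a fresh secret again, which is why re-snapshotting immediately afterward keeps the problem from recurring as quickly.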
Tools (aka “Additions”) updates. These are special packages that come with the virtual machine software and are installed into the virtual “guest” computers. They provide an enhanced interface between the virtual machine and the physical host. Usually they update the video and network drivers, but they may also add things like drag-and-drop and clipboard support, USB plug-and-play support, and so on. VMware updates these fairly often (a few times a year). If you update VMware Workstation from 7.0 to 7.1 or 7.1.2 (etc.), it will also prompt you to update the tools installation in each virtual machine. The more virtual machines you have, the more work you have waiting for you. And when you use snapshot trees, you will need to open each captured state in the tree and update the tools there as well. A bit of a pain, and the workload multiplies with the number of guests and saved states you use.
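A quick back-of-the-envelope shows how that maintenance adds up. The numbers below are made up purely for illustration:

```python
# Each saved snapshot state needs its own tools refresh after a
# Workstation upgrade, so the work scales with guests x states.
guests = 5
states_per_guest = 6  # baseline plus five snapshots, say

updates_needed = guests * states_per_guest
print(updates_needed)  # 30 separate tools installs to sit through
```

Not hard work, just repetitive, which is worth knowing before you build a sprawling snapshot tree.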
Hardware interfaces. Some software requires connecting to a hardware device which may or may not behave well with a virtual machine. Some older (“legacy”) parallel and serial port devices, or certain USB devices, may not auto-detect between the physical port and the software running in a virtual machine. In some cases, you will still need to maintain a physical test computer for such purposes. This is not at all uncommon. If you support hundreds of applications, you won’t typically be able to go completely virtual with your testing. But as Confucius said: “any virtual is better than none”. Ok, he might not have said that.
Isolate “test” from “production”. If you plan on virtualizing desktop computers for testing, that’s one thing. If you’re going to do that with production computers, that’s different. I won’t go into all the issues, because the list is forever long and I’m still burning my candle at both ends on a big project, but think about these aspects of your production environment:
- Anti-virus and anti-malware centralized management (when VMs roll back state constantly)
- Active Directory and Group Policy settings (same)
- Running push scans from a server (when VMs are offline)
- Deploying service packs and application updates (when VMs roll back constantly or are offline)
It can be messy to manage, so it’s usually best reserved for controlled testing. I’m not talking about VDI implementations or MED-V or anything like that. This is desktop-oriented virtualization: virtual machines running on top of physical desktop computers.
I’m not at all trying to discourage anyone from virtualizing computers, especially for testing purposes. I’m just saying that it’s not something to walk into blindly. Plan ahead and it will be a smooth journey.