Virtualisation – What took you so long?
With the amount of attention devoted to virtual computing (or virtualisation, as it is popularly known), you can't be blamed for thinking it must be the biggest thing in business computing since, well, the computer.
Let me tell you that it's not, but that doesn't mean you should stop listening.
Liberate yourselves from server hardware slavery
First developed by Burroughs and then IBM in the 1960s, virtualisation originally concerned big computers (mainframes), splitting them into many virtual computers (virtual machines, or VMs, as they're known today), each of which ran its own operating system and applications.
Virtualisation software brings the same idea to server hardware, precisely matching software requirements with computing power. The concept was developed further by Amdahl and Hitachi in the 1980s and then brought to the mass market by VMware (to run on Windows servers) in the 1990s. Until VMware showed up, Windows computers (especially servers) grew like mushrooms, because it was easier to buy a new server than to have applications share one. After all, servers were only getting cheaper, and reliability seemed to falter when applications cohabited on a single server.
But with so much unplanned growth it became very complicated to pool and share computing capacity, and organisations became wasteful, even if they didn't want to be.
Little wonder computer rooms got as messy as they did and IT managers looked so haggard.
Virtual computing. Old hat, but keep listening
Today, instead of simply running one system on a computer (like Windows on a PC), virtualisation spreads software over many servers, drawing processing power from any number of them, or runs multiple copies of software on a single server. The approach has solved the industry's problem of hardware wastefulness and reduced the underlying availability and reliability risks.
Whereas 10 applications might each use only about 10% of their own physical server, virtualisation allows all 10 applications to reside on a single, fully utilised physical server, improving reliability and availability.
That virtual server can in turn be spread over a number of physical servers alongside other virtual machines, removing the single point of risk (one server failing). In contemplating virtual servers, simply think of them as real computers, though with capacity and performance no longer restrained by the size of the hardware on which the server runs.
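To put rough numbers on that consolidation, here is a minimal sketch in Python; the figures simply restate the 10-servers-at-10% example above and are illustrative rather than measurements of any real environment:

    # A rough, back-of-the-envelope sum: ten lightly loaded dedicated servers
    # versus one consolidated host. Figures are illustrative only.
    apps = 10
    utilisation_per_dedicated_server = 0.10   # each app uses ~10% of its own box

    dedicated_servers = apps                  # one physical box per application
    consolidated_utilisation = apps * utilisation_per_dedicated_server  # ~100%

    print(f"Dedicated: {dedicated_servers} servers at "
          f"{utilisation_per_dedicated_server:.0%} each")
    print(f"Consolidated: 1 server at {consolidated_utilisation:.0%}")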
This is a good thing. In fact, if you're not buying virtual servers, you should be asking why not. So keep listening, and even if virtualisation is in fact old hat, understand that it really does work, cuts costs and helps IT deliver more by using less. It also helps IT move faster, because servers now exist as software rather than as dedicated pieces of hardware.
Getting new ones up and running is therefore much easier and faster – mouse clicks rather than a man in a van – and they can be moved around on the fly and turned up and down as required, saving power.
Cures for premature virtualisation
Virtualisation is not created equal: It goes well beyond servers, and the first big question IT managers must ask themselves is: can we sustain a power outage?
Quick hint: The presence of a generator won't protect you from a meltdown.
Over-subscription: This is the big issue for buyers of virtualised utility capacity. Some providers will over-subscribe their virtual infrastructure capacity, so providers must guarantee the absolute minimum level of on-demand virtual machine units available – a rough check is sketched below.
Redundancy: Some utility providers, and even the odd IT manager, don't have the budget for adequate redundancy. Buyers must look underneath the bonnet and understand that full automatic failover is only possible when the underlying infrastructure comes with N+1 end-to-end redundancy.
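These two concerns interact, and a minimal sketch makes the point. Every figure below is an assumption chosen for illustration (host count, units per host, over-subscription ratio and contractual minimum are not any provider's real numbers); the question is simply whether the promised minimum still fits on the hardware once one host has failed.

    # An illustrative check with assumed figures: does the guaranteed minimum
    # of on-demand VM units survive both over-subscription and the loss of one
    # host (the N+1 worst case)?
    hosts = 8
    units_per_host = 40               # real VM units a single host can run
    oversubscription_ratio = 1.5      # units sold per unit of real capacity
    guaranteed_minimum_units = 300    # what buyers are promised on demand

    real_capacity = hosts * units_per_host
    units_sold = real_capacity * oversubscription_ratio
    capacity_with_one_host_down = (hosts - 1) * units_per_host

    print(f"Sold {units_sold:.0f} units against {real_capacity} real units")
    if capacity_with_one_host_down < guaranteed_minimum_units:
        print("The guaranteed minimum cannot be honoured if a single host fails")
    else:
        print("The guaranteed minimum survives the loss of one host")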
Model behaviour
Modelling how newly virtualised physical IT will behave in its new environment is the best way to correctly size the server capacity you need and avoid collateral damage. It's about planning and design.
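As a minimal sketch of the sizing sum such modelling boils down to – with workload figures, headroom and host specification invented purely for illustration – one might total peak demand, add a margin, and divide by the capacity of one host:

    # A minimal sizing sketch: sum the measured peak demand of the workloads
    # being virtualised, add headroom, and divide by the capacity of one host.
    # Every figure here is invented for illustration.
    import math

    workload_peak_vcpus = [4, 8, 2, 16, 4, 4, 8, 2]            # peaks per application
    workload_peak_memory_gb = [16, 32, 8, 64, 16, 16, 32, 8]
    headroom = 1.25                         # 25% margin for growth and failover
    host_vcpus, host_memory_gb = 32, 128    # one physical host

    hosts_for_cpu = math.ceil(sum(workload_peak_vcpus) * headroom / host_vcpus)
    hosts_for_memory = math.ceil(sum(workload_peak_memory_gb) * headroom / host_memory_gb)
    hosts_needed = max(hosts_for_cpu, hosts_for_memory)

    print(f"CPU needs {hosts_for_cpu} hosts, memory needs {hosts_for_memory}; "
          f"plan for {hosts_needed}, plus one for redundancy")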
Disk limitations: Disk performance is a major determinant of virtual performance. Some virtualised environments connect blade centres to a single modular disk array, which doesn't perform anywhere near as well as enterprise storage. By contrast, virtualised tier-one enterprise storage environments move data between faster and slower disks, shifting workloads so that virtual servers perform optimally.
Stop thinking about the hardware: A virtual server no longer means a piece of hardware, but people don't seem to have stopped thinking about servers as hardware.
Servers are now just a logical piece of processing power. They can be any size and shape – small CPU but tonnes of memory, say – and you can build your own. That changes the game, but it also introduces new opportunities for error.
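To make the "any size and shape" point concrete, here is a minimal sketch; the class and the example shapes are hypothetical, not any hypervisor's API, and exist only to show a server defined as a bundle of resources rather than a box:

    # A hypothetical illustration (not any hypervisor's API): a virtual server
    # is just a declared bundle of resources, in whatever shape the workload needs.
    from dataclasses import dataclass

    @dataclass
    class VirtualServer:
        name: str
        vcpus: int        # logical CPUs, not physical sockets
        memory_gb: int
        disk_gb: int

    # "Small CPU, but tonnes of memory" is a perfectly legitimate shape...
    cache_node = VirtualServer("cache-01", vcpus=2, memory_gb=96, disk_gb=40)
    # ...and so is the opposite.
    batch_node = VirtualServer("batch-01", vcpus=16, memory_gb=8, disk_gb=200)
    print(cache_node, batch_node, sep="\n")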
Audit trail: Perhaps ironically, virtual IT is now more dependent on physical IT. The physical pieces underpinning delivery must be guaranteed fit for purpose. In our case, we ensure vendors are certified and tested in our environments.
Are you experienced?
Anyone buying utility infrastructure, which these days invariably flows from virtualised IT, needs to enquire about the provider's pedigree in multi-tenanted enterprise infrastructure. Running multiple customers in one environment is a completely different ballgame to running the infrastructure for just one organisation.
Just because a company delivers IT services to an IT department doesn't mean it's qualified to run a hosted virtualisation offering.