Virtualisation, which encompasses a range of options including server consolidation, storage optimisation, application and network virtualisation, and desktop provisioning, is by all accounts here to stay. Most organisations have embraced some form of virtualisation, even if only in test environments.
But does virtualisation work? In his incisive assessment of the challenges organisations face as they adopt and manage virtual environments, Clive Longbottom of Quocirca makes a memorable assertion: "One of the problems with virtualisation is that it works." But what makes virtualisation work, and what specific challenges do organisations face as they move to virtual environments? Here we parse Longbottom's statement to find out how virtualisation, when complemented with a clear operational strategy, can be a game changer on the path to cloud computing.
The Challenge of Laxity
Longbottom's statement that "one of the problems with virtualisation is that it works" may sound strange, but it refers simply to the lax, short-term view that many organisations currently take towards virtualisation. In the rush to reduce costs, enable rapid deployment, and improve system availability, many organisations have failed to define a clear operational strategy designed to deliver sustainable, long-term cost savings and operational efficiencies. Organisations must overcome complacency and, as Longbottom writes, become aware that "the path is not completely smooth, and that without the right tools and business planning, the move from a chaotic collection of physical servers can result in a chaotic virtualised platform."
Next, we examine a few of the specific issues and challenges that organisations need to watch for in order to design and optimise their virtualised environments.
Licensing, utilisation rates, and image management all require special attention in a virtual environment. Virtualisation allows images to be created once and used many times. Each image will have its own operating system, and possibly its own application server and various applications. But without the right tools, the repeated provisioning of images without managed de-provisioning after use can lead to licensing woes.
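One way to keep provisioning and de-provisioning honest is to count licences as images come and go. The sketch below is hypothetical (the class and method names are illustrative, not from any particular tool), but it shows the basic bookkeeping: if de-provisioning is skipped, the in-use counts only ever grow, and over-allocation becomes visible.

```python
from collections import Counter

class LicencePool:
    """Minimal licence ledger, updated as images are provisioned/retired."""

    def __init__(self, entitlements):
        # entitlements: product name -> number of licences owned
        self.entitlements = dict(entitlements)
        self.in_use = Counter()

    def provision(self, image_products):
        # Called when an image goes live; each product consumes one licence.
        for product in image_products:
            self.in_use[product] += 1

    def deprovision(self, image_products):
        # Called when an image is retired; omit this and counts never shrink.
        for product in image_products:
            self.in_use[product] -= 1

    def over_allocated(self):
        # Products where live images exceed the licences actually owned.
        return {p: n - self.entitlements.get(p, 0)
                for p, n in self.in_use.items()
                if n > self.entitlements.get(p, 0)}
```

Running three images against two licences would report one licence over-allocated until one image is properly de-provisioned.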
Another issue that requires active measurement is asset utilisation. One application per physical server can leave utilisation hovering around 10% or less, whereas virtualisation offers a means of running several applications per physical server. Utilisation rates can and should be measured using monitoring tools that show CPU, storage and network loads. Even if utilisation rates reach about 50-60%, it is important to monitor the system for images that aren't doing anything except ticking over. These are wasted resources that could be used for other images, or to provide headroom for peak transactional loads.
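Spotting images that are merely ticking over can be as simple as averaging utilisation samples against a threshold. This is an illustrative sketch, not any vendor's API; the function name, sample format and 5% threshold are assumptions.

```python
def idle_images(samples, threshold=0.05):
    """Flag images whose average CPU utilisation stays below a threshold.

    samples: image name -> list of CPU utilisation readings (0.0 to 1.0)
    Returns a sorted list of 'ticking over' candidates for reclamation.
    """
    return sorted(
        name for name, readings in samples.items()
        if readings and sum(readings) / len(readings) < threshold
    )
```

In practice the readings would come from a monitoring tool's export, and the threshold would be tuned to the workload.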
Active image management is also a central part of optimising a virtual environment. Let's say a single image occupies only 2% of a CPU core. If you have 100 of these images live but not being used, that amounts to two full cores' worth of capacity that require electricity, management and cooling, and that have several licences applied to them.
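The arithmetic behind that example is straightforward: multiply the idle image count by the fraction of a core each one consumes.

```python
def wasted_cores(image_count, per_image_core_fraction):
    """Cores tied up by idle images: count times per-image core fraction."""
    return image_count * per_image_core_fraction

# 100 idle images at 2% of a core each tie up two full cores.
print(wasted_cores(100, 0.02))  # 2.0
```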
Workload Management and the Mainframe
A strong virtualisation strategy also requires close attention to workload management. Organisations need to identify the processes their workloads involve and plan for the compute capabilities these will require. A retail bank processing thousands of ATM transactions, for instance, will have very different workload processes from a film production company working on a 3D film. When considering workload management, it is important to remember that each workload is best served by a specific computer configuration and architecture. In these assessments, organisations should not prematurely discount today's mainframe computers, which often play a central part in the modern virtual environment.