VIRTUAL DREAMS CAN COME TRUE

From STORAGE Magazine Vol 6, Issue 6 - July/August 2006
 

What constitutes a well-executed virtualisation engine? And what are the key factors in ensuring the chosen solution delivers all of the gains it promises - and more? Editor Brian Wall reports

While most people have heard of virtualisation, many are not aware of what it actually means and how it is applied across the business environment. So a brief introduction to the technology and its workings seems a good starting point.

'Virtualisation' is basically an approach to IT that pools and shares resources, so that utilisation is optimised and supply automatically meets demand. One of the key characteristics of an adaptive enterprise is the ability to sense change in business demand and automatically deploy resources to meet those demands. Virtualisation, when implemented in the right way, makes this new management capability possible.

As a virtual infrastructure pools and shares the servers, storage, networking and other devices that comprise an infrastructure, resources are allocated across applications and processes to automatically meet the changing demand of the business. Crucially, storage virtualisation is a means of shielding applications from underlying physical storage systems and thus provides greater utilisation of disk arrays by creating a single data-storage pool out of heterogeneous systems.
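The pooling idea described above can be sketched in a few lines. This is a simplified illustration only - the class names, array names and gigabyte accounting are invented for the example, not any vendor's implementation - but it shows the essential trick: logical volumes are carved out of combined capacity, and the client never sees which physical array backs each extent.

```python
# Illustrative sketch: a virtualisation layer pools heterogeneous arrays
# and allocates logical volumes from the combined free capacity.

class PhysicalArray:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb

class StoragePool:
    def __init__(self, arrays):
        self.arrays = arrays

    def total_free_gb(self):
        return sum(a.free_gb for a in self.arrays)

    def allocate_volume(self, size_gb):
        """Return a logical volume as a list of (array, extent_gb) mappings."""
        if size_gb > self.total_free_gb():
            raise RuntimeError("pool exhausted")
        extents, remaining = [], size_gb
        for a in self.arrays:
            take = min(a.free_gb, remaining)
            if take:
                a.free_gb -= take
                extents.append((a.name, take))
                remaining -= take
            if remaining == 0:
                break
        return extents

pool = StoragePool([PhysicalArray("array-A", 100), PhysicalArray("array-B", 50)])
vol = pool.allocate_volume(120)   # spans both arrays transparently
print(vol)                        # [('array-A', 100), ('array-B', 20)]
```

A 120GB request here is satisfied across two physical arrays, yet the application sees a single volume - exactly the shielding of applications from underlying physical storage described above.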

Is it worth it?
The first question an adopter of storage virtualisation should ask is: is it worth it? Will the pain IT is currently suffering be eased enough to justify the effort of deploying the solution? Assuming it is worth it, there are then four fundamental challenges that must be addressed by vendors and considered by potential adopters, states Dave Gingell, vice president of marketing, EMEA, EMC Software Group. These are scalability, functionality, management and support.

"In today's existing SAN environment, performance is distributed across multiple storage arrays," he says. "Each array is independent of every other array. In a virtualised environment, storage performance is aggregated from across the infrastructure. It is this ability to aggregate that underpins the management simplification benefit of a virtualised environment. Therefore, much of storage virtualisation's value comes from its ability to scale and maximum value is achieved when the whole target environment can be aggregated into a single logical view.

"Today, applications storing data on the SAN have access to rich array-based software functionality, such as local and remote replication. By aggregating and abstracting the storage capacity, virtualisation solutions mask the individual devices, breaking the host-to-device relationship that the array-based software needs to function.

Thus, in order not to subtract value and deliver a less functional environment, the virtualisation solution must either replace the value-added functionality provided by the arrays or interoperate with, and preserve, this existing functionality. An ideal solution will not present an either/or proposition, but provide both options.

"A key advantage of today's storage resource management (SRM) tools is that they provide an end-to-end view that integrates everything in the environment. Virtualisation devices affect SRM, or any other 'end-to-end view' management tool: introducing one breaks the end-to-end view into three distinct domains - the server to the virtualisation device; the virtualisation device to the physical storage; and the virtualisation device itself. Re-integrating the management view is essential to achieve the manageability benefits of a virtualised environment."

No compromise
As Gingell emphasises, virtualisation is not a stand-alone technology. The virtualisation device is a new platform with new intelligence and it has to interact with everything you already have, including servers and server-side software, storage networks, networking hardware and network protocols, and storage arrays and array-resident software. "Interoperability and support will be key to the success of any virtualisation solution."

According to Taufik Ma, marketing VP, Emulex, network-based storage virtualisation is rapidly gaining market acceptance, because of the numerous benefits the technology brings to end users.

"A well executed virtualisation solution must be invisible to its SAN environment, from a performance, latency and scalability perspective," he states. "In other words, it's unacceptable to compromise storage performance in the pursuit of leveraging virtualisation benefits. Under the covers, a virtualisation engine should include a next-generation, intelligent storage processor that is unique in its ability to provide low latency, high performance and scalability within virtualised storage environments.

"Virtualisation appliances include the management application on the same system as the virtualisation engine," he explains. "Virtualisation switches contain only the virtualisation engine; the management application is separate and resides on its own system. As many companies have traditionally relied on software-based virtualisation engines, performance and scalability concerns have emerged within virtualisation appliances.

"On the other hand, virtualisation switch-based solutions have been designed from the ground up to keep the management application and virtualisation engine separate, offering the scalability benefits of deploying a single management application across multiple virtualisation engines."

Ma believes the use of next-generation, intelligent storage processors is a key factor for a state-of-the-art virtualisation engine, as they provide "performance invisibility" for both switch-based and appliance-based solutions (where they replace software-based engines). "In doing so, the performance and scalability trade-offs between switches and appliances are eliminated," he says.

Maturing solution
Virtualisation has, of course, been around for many years in one guise or another, as Dave Logan, consulting systems engineer, NetApp, is quick to point out.
"Think of the first RAID systems, where many separate physical drives are presented as a logical abstraction to the client, so it looks like the multiple physical drives are one single drive. Today's market is much more mature, providing virtualisation at higher levels in the stack. There are also a number of compelling solutions and competing approaches - notably in-band versus out-of-band, each with its merits - and the market has not yet decided which is best."
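The RAID abstraction Logan mentions comes down to a simple address-translation function. The sketch below is a hedged illustration of RAID-0-style striping only (the stripe size and function names are assumptions for the example): each logical block address the client uses is mapped to a physical drive and block, so several drives appear as one.

```python
# Minimal sketch of RAID-0-style striping: logical block addresses are
# mapped round-robin, a stripe unit at a time, across the member drives.

STRIPE_BLOCKS = 4   # blocks per stripe unit (chosen arbitrarily for the example)

def logical_to_physical(lba, num_drives, stripe=STRIPE_BLOCKS):
    """Map a logical block address to (drive index, physical block)."""
    stripe_no, offset = divmod(lba, stripe)
    drive = stripe_no % num_drives
    physical_block = (stripe_no // num_drives) * stripe + offset
    return drive, physical_block

# With two drives, logical blocks 0-3 land on drive 0, 4-7 on drive 1, etc.:
print(logical_to_physical(0, num_drives=2))   # (0, 0)
print(logical_to_physical(4, num_drives=2))   # (1, 0)
print(logical_to_physical(8, num_drives=2))   # (0, 4)
```

Today's virtualisation engines apply the same principle higher in the stack - mapping logical volumes onto whole arrays rather than blocks onto spindles.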

Among the benefits of a well-constituted virtualisation engine, he identifies reduced cost through simpler management of disparate heterogeneous systems, increased asset utilisation (true storage utilisation is typically around 30%) and a dramatic reduction in storage provisioning times - from weeks to hours and minutes.

"We have seen one customer with a solution that can provision virtualised storage and a server within seconds of a client paying for a service with a credit card," he relates.

Five further benefits of a well-constituted virtualisation engine that Logan highlights are:

• The ability to move transparently between storage devices as needs change - eg, Hierarchical Storage Management
• Increased performance and storage resilience
• The means to tier storage invisibly to end users
• A utility computing model, where use meets needs
• Over-provisioning - eg, if everyone asks for money from the bank at once, it will run out, but this happens very rarely. Similarly, most clients request storage for a two- to three-year period. They are not expecting to use it all upfront
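The over-provisioning point - the bank analogy in the final bullet - reduces to simple accounting. The sketch below is an illustrative example only (the class and method names are invented, and real engines track blocks rather than gigabytes): logical capacity promised to clients may exceed physical capacity, so the pool's job is to watch actual consumption against what physically exists.

```python
# Illustrative over-provisioning (thin-provisioning) accounting: promise
# more logical capacity than physically exists, since clients rarely
# consume their full allocation upfront.

class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.promised_gb = 0     # sum of logical volume sizes handed out
        self.consumed_gb = 0     # capacity actually written

    def provision(self, size_gb):
        self.promised_gb += size_gb   # no physical space reserved yet

    def write(self, gb):
        if self.consumed_gb + gb > self.physical_gb:
            raise RuntimeError("physical pool exhausted - add storage")
        self.consumed_gb += gb

    def oversubscription_ratio(self):
        return self.promised_gb / self.physical_gb

pool = ThinPool(physical_gb=100)
for _ in range(3):
    pool.provision(60)        # 180GB promised against 100GB physical
pool.write(40)
print(pool.oversubscription_ratio())   # 1.8
```

As with the bank, the engine must monitor the ratio and trigger capacity expansion well before consumption approaches the physical limit.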

"A good virtualisation engine must provide all the above features, without reducing performance or reliability or increasing complexity. The solution also needs to provide common data management and protection without complicating DR and snapshots," he adds.

Demanding environment
In today's environment, IT must deliver on increasingly demanding service levels. These demands come in two forms, says Ian Bond, consulting systems architect, Cisco Systems UK&I. "The first is for increased data availability. 'High availability', in the usual sense, refers to a system that is able to circumvent unplanned outages. However, as organisations increasingly want 24-hour operations capability, the ability to eliminate unplanned downtime is not enough. Planned downtime, resulting from operational requirements such as data centre moves or platform migrations, must also be addressed. Eliminating both types of downtime will satisfy the new, required standard of 'continuous data availability'."

The second area of demand he pinpoints is for improved delivery on the information requirements of the business. "Storage needs to deliver the right information to the right place at the right time. Storage resources across an organisation need to be effectively allocated and dynamically reallocated, based on business policy and the value of the data to the business at any point in time."

Bond sees storage virtualisation as addressing both challenges: simplifying infrastructures, enabling non-disruptive operations and facilitating critical elements of a proactive Information Lifecycle Management (ILM) strategy. He identifies some of the key characteristics and intelligence that are essential in a well designed storage virtualisation product as:

• An open architecture that supports multi-vendor storage, host and network devices

• Scalability - the virtualised environment would ideally scale from entry level through to the largest requirements for storage capacity

• Security - for example technologies such as Secure Shell (SSH) Protocol, RADIUS, Simple Network Management Protocol Version 3 (SNMPv3), and role-based access control

• Traffic management - maximising efficiency and prioritising control traffic and the different tiers of storage traffic

• Diagnostics - including the ability to pre-empt outages or reduced service

• High availability - the virtualisation engine itself needs to be designed so as to support high-availability production environments.

Despite analysts predicting an increase in IT spending, squeezed operating margins are continuing to force IT directors to look at new ways to reduce costs, yet maintain service levels. In addition to needing to do much more with a good deal less, IT managers are also facing the challenge of standardising systems, reducing the management burden and dealing with upgrades, alongside compliance issues brought to the fore by current security and legislation demands.

"Part of the answer to this over-arching problem is to look at a virtualised solution," says Hugh Jenkins, enterprise marketing manager for Dell UK. "Many servers and storage devices do not currently realise their full capacity; under-utilised, they are not cost-effective.

"Storage virtualisation creates one interface to virtual machines, allowing a storage administrator to perform the critical tasks of backup, archiving and recovery more easily, and in less time, thus making better use of resources.

"SAN infrastructures have been around since the late 90s and, as such, many organisations are now finding their SAN environments are distributed across multiple sites as their business has evolved - either through data growth or corporate acquisition. As a result, these distributed SANs have become complex to manage and utilisation levels are often lower than expected. Linking all these SANs into a single virtual SAN will help to drive up operational efficiency and improve the overall service to an organisation."

However, before embarking on a virtualisation project, Jenkins suggests some key considerations to ensure a robust solution is created:

• Security and compliance - consider how your company's compliance obligations impact the availability and security of data, and ensure your virtualised solution reflects that

• Cost - what are your budget constraints? This may limit the overall scale of your project. However, do not lose sight of the reduction in total cost of ownership

• Scalability - how fast is your business growing and what are your capacity requirements? A virtualised solution offers scalability with ease, but planning in advance is advisable - how often should you review your requirements? Don't just develop the solution and move on to the next project. Make certain it's reviewed regularly to ensure it continually meets the needs of your business

• Training - although a virtualised system is more manageable, who will manage it, and will you need to train your IT staff?

"By taking these factors into consideration, you will be able to build a bespoke virtualised system that is manageable, scalable and reliable," adds Jenkins.

Integrated solution
Virtualisation engines do not live in isolation, of course. As Guy Bunker, chief scientist at Symantec, says, they should be planned alongside other IT strategies, such as business continuity, disaster recovery and general availability procedures, so that IT managers (and their superiors) can holistically integrate virtualisation into their IT.

"However, whilst virtualisation is very useful, it is not the answer to all IT problems. The scalability of virtualised server networks is a good example - while fewer servers may seem to mean a lighter administrative burden, it may well still be the case that each virtual machine has to be patched separately. By ensuring that the limitations, as well as the benefits, of the strategy and implementation being used are known, focus can be more easily directed toward maximising advantages, rather than mitigating problems.

"Technology moves at an amazing pace - a virtualisation solution available today may be replaced by another tomorrow. One simple example is high availability, which has gone from 'hot standby', with a dedicated second machine to which applications can fail over (application virtualisation), to clustering, where there may be tens of machines all covering for one another - so, if one goes down, its applications can be shared among the remaining nodes - and on to 'the grid', which may have thousands of nodes to which an application might move.

"This agility of the application to 'right size' itself by being moved to an appropriate server is just one piece of the virtualisation puzzle; the data, or rather the storage, also has to move (either logically or physically), so the application can access it.

"Add to the picture virtual machines, which provide another aspect of resource partitioning, and you begin to see the flexibility - and perhaps more importantly the complexity - of the new environments that are emerging. Solutions need to be assessed by their ability to incorporate new technologies seamlessly."

The DR connection
Apart from the enhanced management functions it clearly offers, virtualisation is also often an integral part of disaster recovery (DR) planning, due to the mirroring and replication functions that are often part of the product functionality. Virtualisation can also greatly ease technology upgrades and data migration by simplifying the creation of data copies.

By utilising virtualisation, storage can be consolidated into a single global pool, releasing captive capacity for general use. This can make a dramatic difference, allowing an array to be used in full, down to the last megabyte.

With this better capacity utilisation, it is also possible to take advantage of automated capacity expansion - monitoring capacity use and automatically pulling in storage as it is consumed and needed. This can help to reduce the rate of capacity usage and even be used to allocate more storage than there is actual physical space. For applications where much of the data is actually zero-filled space, this can bring about large savings in storage capacity required, delaying the need for buying that next storage array.
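The zero-filled-space saving mentioned above can be illustrated with a short sketch. This is a simplified model, not a real engine's data path (block size, names and the in-memory map are assumptions for the example): a thin volume only maps physical blocks when non-zero data is written, so large runs of zeros cost nothing until real data arrives.

```python
# Illustrative allocate-on-write thin volume: zero blocks are never
# backed by physical storage, so logical size can far exceed usage.

ZERO = b"\x00" * 512          # one logical block of zeros

class ThinVolume:
    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks
        self.mapped = {}      # logical block number -> data actually stored

    def write(self, block_no, data):
        if data == ZERO:
            self.mapped.pop(block_no, None)   # zeros stay unmapped
        else:
            self.mapped[block_no] = data

    def read(self, block_no):
        # Unmapped blocks read back as zeros, as the application expects
        return self.mapped.get(block_no, ZERO)

    def physical_blocks_used(self):
        return len(self.mapped)

vol = ThinVolume(logical_blocks=1_000_000)    # ~512MB logical capacity
vol.write(7, b"\x01" * 512)
print(vol.physical_blocks_used())             # 1, despite the large logical size
```

An application that formats a large, mostly empty volume consumes almost no physical capacity here - which is exactly how the need to buy the next array gets delayed.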

Storage can also be added and introduced into the system without the requirement to take down all of the storage on the network to reconfigure it, greatly improving the uptime and availability of applications running on an organisation's data storage, while helping to ease the pain associated with capacity expansion.

Overall, storage virtualisation tools - which enable pooling of disks, RAID and storage arrays into a single pool - empower administrators to carve out storage for their clients more easily, enable companies to optimise capacity usage more effectively and help to reduce the cost of data storage.

At a time when data centre administrators are being forced to manage more and more storage, but where budgets are flat, virtualisation can prove a powerful ally. ST

©2006 Business and Technical Communications Ltd. All rights reserved.
No part of this site may be reproduced without written permission of the owners.