Virtualisation – how far we have come
Sun, 1st Apr 2012

Storage virtualisation technology has been around for roughly as long as its server counterpart, which is now around eight years. The reason storage virtualisation hasn't enjoyed the same attention and popularity as server virtualisation is that in the mid-2000s, IT organisations were far more interested in the CPU wars between Intel and the traditional UNIX vendors than in ‘boring old storage'.

Fast forward to today, and IT has grown up from a ratty teenager into a responsible adult of whom the business has very different expectations. IT organisations are being held far more accountable than ever before: they need to make decisions based on value instead of the ‘coolness factor', adapt to changing business conditions faster instead of ‘sleeping in late', and demonstrate that they are delivering savings back to the business rather than ‘taking money out of its parents' wallets'. Couple this with the fact that businesses are creating new data faster than they can process it (1.8 zettabytes in 2011 alone, according to IDC), and you can begin to understand why storage has now taken centre stage.

The three primary ways of virtualising storage are in the host (file systems), in the Storage Area Network (appliances) or within the controller (storage array). Each has its pros and cons, but the controller-based approach has become the predominant implementation thanks to its ease of deployment and common management.

Hitachi Data Systems introduced external storage virtualisation back in 2004 with its Universal Storage Platform (USP), which received a number of awards for its innovation and ease of deployment. Now, two generations on from the USP, the current-generation Virtual Storage Platform (VSP), released in September 2010, extends advanced thin provisioning, auto-tiering and replication features to any vendor's storage that connects to it. Today, Hitachi Data Systems has over 20,000 customers worldwide with virtualisation capabilities.

Here are four myths about storage virtualisation, and how the maturing of the technology has dispelled them.

Storage virtualisation is complex and generally difficult to implement

Many early implementations of storage virtualisation were appliance-based (e.g. FalconStor, IBM SVC, EMC Invista) and involved inserting dedicated devices into the SAN data path. Deploying these solutions was not for the fainthearted: it involved a fair degree of planning and project management, required outages to install, new skills to manage and greater effort to maintain once operational. Back in 2004, we took the revolutionary approach of embedding virtualisation capabilities directly into the storage controller, so it could be rolled out as part of a technology refresh. This means less complexity to deploy, because there are fewer ‘moving parts' in the infrastructure, and easier ongoing management, because virtualisation functions such as discovery of multi-vendor storage and migrations are accomplished within the standard management tools rather than through newly deployed ones.
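To make the controller-embedded model concrete, here is a minimal, purely illustrative sketch in Python. The class and method names are invented for this article, and a real array implements this logic in controller firmware rather than application code; the point is simply that externally attached, multi-vendor LUNs appear in the same namespace and management path as internal ones, with no separate appliance in the data path.

```python
# Purely illustrative sketch of controller-embedded virtualisation.
# All class and method names are hypothetical.

class Lun:
    def __init__(self, lun_id, size_gb, vendor, external=False):
        self.lun_id = lun_id
        self.size_gb = size_gb
        self.vendor = vendor
        self.external = external  # True if the LUN lives on an attached array

class VirtualisingController:
    """Presents internal and externally attached LUNs as one namespace."""

    def __init__(self):
        self.luns = {}

    def add_internal_lun(self, lun_id, size_gb):
        self.luns[lun_id] = Lun(lun_id, size_gb, vendor="internal")

    def discover_external_array(self, vendor, reported_luns):
        # External LUNs are mapped into the same namespace, so hosts and
        # management tools see one pool rather than a separate appliance.
        for lun_id, size_gb in reported_luns:
            self.luns[lun_id] = Lun(lun_id, size_gb, vendor, external=True)

    def inventory(self):
        for lun in self.luns.values():
            origin = "external" if lun.external else "internal"
            print(f"{lun.lun_id}: {lun.size_gb} GB ({lun.vendor}, {origin})")

ctrl = VirtualisingController()
ctrl.add_internal_lun("lun-01", 500)
ctrl.discover_external_array("vendor-x", [("lun-90", 1000), ("lun-91", 2000)])
ctrl.inventory()  # one management view, no extra device in the data path
```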
If I have virtualisation in the server, I don't need it in my storage

Virtualisation capabilities are best implemented where there is intelligence. Much as a server needs to coordinate and manage CPU, memory and I/O resources, today's Storage Area Networks need to do much the same. Storage arrays do far more than ship data between servers and disks: they manage local and long-distance replicas, prioritise workloads and move data between different tiers of media, and some even manage data intelligently by de-duplicating, compressing or encrypting it. Implementing virtualisation in the storage layer is actually complementary to server virtualisation. By offloading jobs like provisioning, moving and replicating data to the array, servers are freed to do what they were designed to do, which is run more applications. Furthermore, by implementing virtualisation closest to the data, its many benefits can be made available to any host that connects to the array, regardless of class or operating system, delivering a consistent and repeatable set of capabilities. We have integrated our storage virtualisation capabilities with hypervisor technologies such as VMware's VAAI framework and Microsoft's System Center (SCOM/VMM) to optimise management tasks and make day-to-day management seamless and simple.

Storage virtualisation is expensive

Between all the hardware, software and professional services required to get storage virtualisation up and running, some organisations have questioned whether the costs are worth it. Like all things in life, some hard work up front will pay off in time. Our approach to virtualisation was not only to make it easy to introduce into the data centre, but also to reduce both cost and risk over time. The fact remains that most organisations' storage utilisation rates sit between 30% and 40% once RAID overhead, stranded storage and replicas are taken into account. Pooling storage resources together delivers an immediate improvement by sharing capacity more effectively. Then, features like thin provisioning and dynamic tiering improve utilisation dramatically by allocating capacity only when it is actually required. Acting together, these techniques can lift storage utilisation to 80-90%, reclaiming a lot of capacity that would otherwise have had to be purchased (a back-of-the-envelope example appears at the end of this article). Often, implementing storage virtualisation pays for itself within the first year, not to mention the ongoing cost benefits as storage grows.

Storage virtualisation = vendor lock-in

Another fear people have about implementing a particular vendor's virtualisation technology is whether they will be able to move away from it in the future if they no longer want it. Some solutions on the market, particularly appliance offerings, intercept and store data in proprietary ways, making them difficult, if not impossible, to remove once they are operational. For IT and the business this can become a substantial problem when introducing new technology from other vendors and trying to get support. We chose to implement virtualisation in an open and non-proprietary way, using industry-standard protocols (i.e. SCSI) and avoiding the need to maintain internal mapping tables. This allows the storage virtualisation layer to be removed and the underlying storage connected directly to hosts, maintaining complete data coherence and integrity. In fact, Hitachi's VSP can be used purely as a migration device, with no internal storage, to facilitate a technology refresh; once the migration has been completed it can be pulled out, with no need for it to be maintained.
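To see why this matters, consider a deliberately simplified model, sketched below in Python. The lists and function names are hypothetical stand-ins for SCSI LUNs and array firmware; the point is only the addressing property. When the virtualisation layer writes each virtual block to the same address on the backend LUN, the layer can be removed and the host attached directly to the backend with the data intact. A private mapping table breaks that property.

```python
# Deliberately simplified model of why 1:1 pass-through avoids lock-in
# while a proprietary mapping table creates it. Illustrative only.

BLOCKS = 8

# Standards-based pass-through: virtual block address == backend address.
backend_a = [None] * BLOCKS

def passthrough_write(lba, data):
    backend_a[lba] = data          # data lives at the address the host used

passthrough_write(5, "payroll")
# Remove the virtualisation layer and attach the host directly to the
# backend array: the data is exactly where the host expects it.
assert backend_a[5] == "payroll"

# Appliance-style private mapping: only the appliance can find the data.
backend_b = [None] * BLOCKS
mapping = {}                        # held inside the appliance

def mapped_write(lba, data):
    slot = next(i for i, b in enumerate(backend_b) if b is None)
    backend_b[slot] = data          # data lands wherever happened to be free
    mapping[lba] = slot             # the appliance alone knows where

mapped_write(5, "payroll")
# Without the appliance's mapping table, the host's block 5 is empty;
# removing the appliance effectively strands the data.
assert backend_b[5] is None and backend_b[mapping[5]] == "payroll"
```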
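Finally, the back-of-the-envelope arithmetic promised in the section on cost. The utilisation percentages are the ones quoted above; the 100 TB starting point is an illustrative assumption, not a measurement from any particular environment.

```python
# Back-of-the-envelope capacity arithmetic using the utilisation figures
# quoted above. The 100 TB starting point is an illustrative assumption.

purchased_tb = 100.0
util_before = 0.35   # typical 30-40% once RAID overhead, stranded
                     # capacity and replicas are accounted for
util_after = 0.85    # 80-90% with pooling, thin provisioning and tiering

data_tb = purchased_tb * util_before   # real data actually stored: 35 TB
needed_tb = data_tb / util_after       # capacity that data needs afterwards
reclaimed_tb = purchased_tb - needed_tb

print(f"Data stored:        {data_tb:.0f} TB")
print(f"Capacity needed:    {needed_tb:.0f} TB")    # ~41 TB
print(f"Capacity reclaimed: {reclaimed_tb:.0f} TB")  # ~59 TB back in the pool
```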