Unrecoverable: Backup's inconvenient truth
Mon, 1st Aug 2011

Organisations pay a premium for disaster recovery, hoping to protect themselves from the worst effects of any calamity. But too often there is no guarantee that their insurance policy will pay out.

One particularly vulnerable area is data backup and recovery. This may come as a surprise to some, especially as so many organisations have invested millions in data backup and recovery technology.

The truth is that although businesses back up all the data they need to, they have no idea whether those backups can be recovered within the timeframes required, or at all.

Ensuring that every single byte of data can be safely recovered in the wake of a disaster has been so expensive, in terms of money, man-hours and infrastructure, that most organisations shy away from addressing the issue. When the worst happens, though, the effectiveness of any recovery is essentially left to chance. In effect, organisations are playing Russian roulette with their data.

Crucial decisions

Data backup has always been a laborious process, undertaken because it is far preferable to losing valuable information forever. Increasingly, organisations are also under pressure to back up data due to compliance regulations such as Sarbanes-Oxley and HIPAA: failure to prove that data is safe and recoverable can hit an organisation hard. At its simplest, backup involves saving data on a secondary storage device.
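A minimal sketch of that simplest case might look like the following Python snippet (the paths and helper name are illustrative, not drawn from any particular product): copy a file to secondary storage and record a checksum so a later restore can at least be sanity-checked.

```python
import hashlib
import shutil
from pathlib import Path

def back_up(source: Path, dest: Path) -> str:
    """Copy one file to secondary storage and record a checksum of the copy."""
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(source, dest)  # the backup itself: a copy on a second device
    # Read the whole copy back for brevity; a real tool would stream in chunks.
    digest = hashlib.sha256(dest.read_bytes()).hexdigest()
    Path(str(dest) + ".sha256").write_text(digest)  # kept for later verification
    return digest
```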

Yet, for a business, backup requires much more than this. The business must invest in the infrastructure and tools necessary to store data securely and to back it up and restore it accurately. It has to make sure it has the time, money and personnel to back up (and recover) data on a regular basis so that, in the event of a disaster, huge amounts of crucial information are not irretrievably lost. And it often has to make the hard choice of deciding what, exactly, constitutes business-critical data, as saving everything can be a costly exercise in terms of the additional hardware and physical space needed.

For the business, backup is anything but simple: it is a time-consuming, complicated, expensive process.

Testing is paramount

Recently, virtualisation has helped organisations make backup and recovery a little less resource-intensive. Infrastructure costs are significantly reduced as multiple virtual machines can be backed up onto a smaller number of physical devices. With the correct tools, these virtual backup environments can be easy to manage. It is benefits such as these that are helping make virtualisation the de facto choice for data backup.

However, despite virtualisation's ability to make backup and recovery more cost-effective, it has not yet made it perfect. The issue is one that has plagued data backup in the physical world since the beginning: testing.

It's all very well being able to back up and recover machines, and even to recover data down to the level of individual files within a backed-up machine. But if that machine is corrupted, or those files are unreadable, the whole process has been a waste.
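This is why even a basic integrity check matters. The sketch below, a hypothetical companion to the checksum recorded at backup time in the earlier snippet, detects a backup file that has silently become corrupted:

```python
import hashlib
from pathlib import Path

def backup_intact(backup: Path) -> bool:
    """Compare a backup file against the checksum recorded when it was taken."""
    recorded = Path(str(backup) + ".sha256").read_text().strip()
    actual = hashlib.sha256(backup.read_bytes()).hexdigest()
    return recorded == actual  # False: the file has changed since backup time
```

A checksum catches bit-level corruption, but it says nothing about whether a restored machine will actually boot or its applications will start. That is precisely why full recovery testing, not just integrity checking, is needed.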

This isn't only a practical problem. From a regulatory compliance perspective, there is no guarantee that an organisation's backup policy will stand up to inspection. The main problem is that, all too often, recovery testing is not performed regularly or on all machines. Instead, it is deferred or only partially implemented – a highly unsatisfactory compromise.

Prioritising resources

The main reason that recovery testing isn't implemented correctly is, as with most things, a matter of available resources. To truly test a backup, it must be recovered, which is not a simple process. Generally, to save costs, backed-up data is hosted on relatively inexpensive hardware that is not suited to testing recoverability. This means extra infrastructure is needed.

Either space has to be found on existing IT hardware to test backups, which carries the added risk of a failed test affecting the environment as a whole, or an entirely new infrastructure has to be built. Either way, creating and maintaining this testing environment eats up time, money and expertise that organisations may feel could be better spent elsewhere. And even if an organisation can allocate budget to the additional infrastructure, there is still the task of actually testing the recoverability of each backup, which is itself a hugely resource- and time-hungry process.

Testing therefore becomes a simple case of priorities. When working with limited resources, most organisations see testing and verification for an event that may never even happen as a lower priority than more immediate business concerns.

Even if testing is implemented, organisations still face the tough choice of deciding when data becomes expendable, as it simply has not been feasible to verify the recoverability of every backup. As data becomes the lifeblood of the business, these decisions only get harder.

Virtualisation as solution

There is a solution to this dilemma, and it lies in virtualisation. Security researchers have already seen the benefits of using virtualisation to create live, isolated testing environments for assessing defences against malware and other threats without risking the overall infrastructure.

Using similar principles, organisations can create an isolated backup testing environment. The environment is created on the fly, for a limited period, using any spare resources in an organisation's backup and production environments, removing the need to invest in new infrastructure.

Since its sole purpose is to verify that the backup recovers correctly, the overhead on the infrastructure is both temporary and minimal. Because the virtual machine runs directly from the compressed backup, publishing the content of the backup files to a host in the temporary virtual testing environment, the test requires only a fraction of the computing resources normally used in production.
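A hypothetical sketch of that verification loop, in Python, might look like the following. The lab and VM objects and their methods are invented for illustration; commercial tools expose equivalent workflows, but their APIs differ:

```python
import logging

def verify_backups(backups, lab):
    """Boot each backup in an isolated virtual lab and run basic health checks."""
    results = {}
    for backup in backups:
        vm = lab.boot_from_backup(backup)  # VM runs directly from the compressed backup
        try:
            # A heartbeat and a ping are crude but cheap proxies for "it recovered".
            ok = vm.wait_for_heartbeat(timeout=300) and vm.responds_to_ping()
        finally:
            vm.power_off()  # nothing persists; the backup file itself is untouched
        results[backup.name] = ok
        logging.info("%s: %s", backup.name, "recoverable" if ok else "FAILED")
    lab.tear_down()  # the temporary environment disappears when testing is done
    return results
```

Because each test borrows spare capacity only for the minutes it runs, a loop like this can walk through an entire estate of backups on a schedule, rather than sampling a lucky few.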

As a result, machines can be tested without risk in a fraction of the time, using a fraction of the infrastructure previously needed, meaning that testing every single backed-up server is no longer a pipe dream.

In the digital age, data backup and recovery is a vital business function. Until now, testing the recoverability of all backups has been out of reach for most. Things have changed, and organisations have to face an inconvenient truth: either they adapt to the changing technology climate, or they simply fuel the chances of disaster by using antiquated approaches to backup and recovery.

As virtualisation moves closer to being the de facto choice for data backup, testing and verification will become much simpler with the appropriate tools, providing organisations with a clear path to escape any risk of extinction.