Businesses need to take responsibility for recoverability of data, even if it is in the cloud, says StorageCraft’s Greg Wyman.
Life is good when cloud backup is drifting along peacefully, which is most of the time. Users happily assume their data is encrypted and secure, but is it easy to recover? Even in the cloud, data recoverability is critical and organisations need to take responsibility for that recoverability.
Migrating company data effectively into the cloud can be an issue too. Traditional migration products are expensive, painful and disruptive for both physical and virtual servers.
The new breed of real-time recovery solutions delivers near-zero-impact migrations. These solutions protect data with incremental backups as often as every 15 minutes, even for complex databases, and usually only a relatively small volume of data changes at the sector level in any 15-minute window.
These small incremental sector-based backups are replicated to the cloud provider or data centre. Often the first full backup is shipped via ‘sneaker-net’ on USB or NAS devices, with subsequent incrementals replicated over the network in near real time.
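To illustrate the idea, here is a minimal Python sketch of sector-level change tracking: only the sectors that differ from the previous backup are captured and shipped. The sector size and helper names are illustrative assumptions, not any product’s actual implementation:

```python
SECTOR_SIZE = 512  # bytes per sector (illustrative value)

def changed_sectors(previous: bytes, current: bytes) -> dict:
    """Return {sector_index: sector_bytes} for sectors that differ."""
    delta = {}
    for offset in range(0, len(current), SECTOR_SIZE):
        idx = offset // SECTOR_SIZE
        if previous[offset:offset + SECTOR_SIZE] != current[offset:offset + SECTOR_SIZE]:
            delta[idx] = current[offset:offset + SECTOR_SIZE]
    return delta

def apply_delta(base: bytes, delta: dict) -> bytes:
    """Rebuild the current image by applying a sector delta to the base."""
    image = bytearray(base)
    for idx, sector in delta.items():
        image[idx * SECTOR_SIZE:(idx + 1) * SECTOR_SIZE] = sector
    return bytes(image)

# The full image travels once (e.g. by 'sneaker-net'); afterwards only deltas move.
base = b"A" * (SECTOR_SIZE * 4)
current = bytearray(base)
current[SECTOR_SIZE:SECTOR_SIZE + 5] = b"HELLO"   # one sector changes
delta = changed_sectors(base, bytes(current))
print(len(delta))  # 1 sector to replicate instead of the whole image
```

The cloud copy stays current by applying each delta to the last known image, which is why the incrementals can be so small and frequent.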
Once the cloud site has caught up with the production virtual and/or physical servers, a business can cut over to the cloud. At, say, 7pm, force everyone to log off, create a final incremental backup, replicate it to the cloud and finalise the job by spinning up the customer’s production server in a virtual environment (typically Hyper-V or VMware). Reinstate connections, and when users log on they are running from the cloud with data from only a few minutes earlier.
Equally compelling is the rollback process if the unexpected happens: simply turn the production servers at the customer’s site back on and everything works.
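The cutover sequence above can be sketched as an ordered list of steps, with rollback to the on-site servers if any step fails. The step functions here are placeholders, not a real product’s API:

```python
def rollback():
    """Rollback path: power the on-site production servers back on."""
    print("rolling back: restarting on-site servers")

def cutover(steps):
    """Run cutover steps in order; any failure triggers a rollback."""
    completed = []
    for name, step in steps:
        try:
            step()
            completed.append(name)
        except Exception:
            rollback()
            return completed, False
    return completed, True

# Placeholder steps mirroring the process described above.
steps = [
    ("force log-off",             lambda: None),
    ("final incremental backup",  lambda: None),
    ("replicate to cloud",        lambda: None),
    ("spin up cloud VM",          lambda: None),
    ("reinstate connections",     lambda: None),
]
done, ok = cutover(steps)
print(ok, len(done))  # True 5 when every step succeeds
```

The point of the sketch is the shape of the process: cutover is a short, ordered sequence, and rollback is a single cheap action rather than a second migration.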
In general the same three principles of data protection apply in the cloud as at local site.
1. Recovery time objective (RTO) – how long a business is prepared to be down.
2. Recovery point objective (RPO) – how far back you must go to perform a ‘clean’ data restore.
3. Test recovery objective (TRO) – the most recent point to which a business is completely confident of restoring data.
Most businesses are advised to store their data in two, if not three, locations, with at least one being remote. Where possible, regular recovery tests across all servers and sites maximise the chances of a successful recovery. If the process is automated, it should have zero or minimal impact on support staff.
In the cloud, the same discipline applies: perform a local backup, then test recoverability by automatically running Microsoft’s chkdsk utility against the backup volume. Next, replicate these backups to a different cloud provider, where recoverability is re-tested automatically. For critical data and databases, replicate to a cloud that delivers real-time disaster recovery, with the ability to spin up critical servers in minutes, restored to a point no more than 15 minutes old.
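Beyond a filesystem check such as chkdsk, the simplest automated test after replication is confirming that the replica still matches its source. A minimal Python sketch using a plain checksum comparison; the file names are hypothetical and this is an illustration of the principle, not any vendor’s actual verification method:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large backup images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_replica(source: Path, replica: Path) -> bool:
    """A replicated backup is only useful if it still matches the original."""
    return sha256_of(source) == sha256_of(replica)

# Demo with temporary files standing in for backup images (hypothetical names).
with tempfile.TemporaryDirectory() as d:
    src = Path(d) / "backup.img"
    src.write_bytes(b"sector data" * 1000)
    dst = Path(d) / "replica.img"
    shutil.copy(src, dst)
    print(verify_replica(src, dst))   # True: replica is intact
    dst.write_bytes(b"corrupted")
    print(verify_replica(src, dst))   # False: a restore from this copy would fail
```

Run on a schedule, a check like this turns “we assume the replica is fine” into a logged pass/fail result with no manual effort.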
Finally test and test again to ensure data, databases and applications are recoverable quickly and reliably.
Greg Wyman is vice president Asia Pacific for StorageCraft which provides backup, disaster recovery, system migration and data protection solutions.