In the past few months, outages across the financial, telecommunications and utilities sectors in the Asia Pacific region have occurred with alarming regularity. While some, like the electricity outages in South Australia and Auckland, were caused by forces of nature, outages at one of Australia’s big four banks were the result of system failures. At Australian telecommunications provider Telstra, senior executives have lost their bonuses over their outage headaches.
A new report by Ovum and Zerto entitled ‘New catalysts setting the pace for disaster recovery initiatives’, which surveyed 400 enterprises across the Asia Pacific region, found that the biggest causes of disruption are natural disaster (55%) and IT hardware failure (55%), followed by power failure (45%) and network failure (43%).
The report also found that 72% of organisations experienced at least an hour of data loss after a disruption, with the largest group, 33%, losing between one and five hours of data. Among organisations that missed their recovery expectations, 78% missed by less than 60 minutes for mission-critical applications. For non-mission-critical applications, that proportion was 64%.
The cost of downtime can exceed $100,000 per hour for medium-sized enterprises. Organisations interested in better understanding the revenue, sales and productivity losses incurred during downtime can use a downtime calculator to get a big-picture view of the impact on their enterprise.
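The idea behind such a calculator can be sketched in a few lines. The formula and figures below are illustrative assumptions, not the method used by any particular vendor's calculator: downtime cost is modelled as lost revenue plus lost employee productivity.

```python
# Minimal downtime-cost estimator. The formula and all input figures are
# hypothetical assumptions for illustration, not any vendor's actual model.

def downtime_cost(hourly_revenue: float,
                  affected_employees: int,
                  avg_hourly_wage: float,
                  productivity_loss: float,
                  outage_hours: float) -> float:
    """Estimate outage cost as lost revenue plus lost productivity."""
    revenue_loss = hourly_revenue * outage_hours
    productivity_cost = (affected_employees * avg_hourly_wage
                         * productivity_loss * outage_hours)
    return revenue_loss + productivity_cost

# Example: a medium-sized enterprise down for two hours
cost = downtime_cost(hourly_revenue=40_000,
                     affected_employees=200,
                     avg_hourly_wage=45,
                     productivity_loss=0.8,   # 80% of work blocked
                     outage_hours=2)
print(f"Estimated downtime cost: ${cost:,.0f}")
```

Even with conservative inputs, a two-hour outage comfortably clears the $100,000-per-hour range cited above once reputational and recovery costs are added on top.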
So it’s not a question of ‘if’ your systems are going to go down, but ‘when’. Downtime is extremely expensive, and companies that don’t get back up and running quickly are at huge risk of going out of business altogether. More damaging than the financial loss, for some companies, is the loss of reputation, which erodes customer confidence and drives business away.
Below are my top three tips on how to reduce downtime if your business is hit by an outage:
The bare minimum customers expect from their service providers is reliability and on-demand access; anything less and their trust, and their custom, is gone. In the case of the bank outages, shoppers both domestic and overseas found they were unable to pay their petrol, restaurant and grocery bills, leaving many to take to social media to vent their frustrations.
In today’s digital age, investing in a robust business continuity (BC) and disaster recovery (DR) solution for resiliency is critical to maintaining customer trust.
In addition to some of the usual factors causing disruption and outages among enterprises such as natural disasters and power failure, cyberattacks are becoming more commonplace.
So consider cloud-based recovery, as it is more difficult for potential hackers to launch a sustained attack against a cloud provider than against a static data centre. The cloud also offers greater capacity, helping to protect against distributed denial-of-service (DDoS) attacks. The near-infinite bandwidth and capacity of cloud providers means they can be all but immune to volume-based attacks.
Having a DR strategy in place is often not enough. Companies need to test it regularly to guarantee business continuity for their customers. Testing can be one of the most complex aspects of DR, which deters many organisations from running tests regularly, but this leaves them vulnerable; it’s like holding an out-of-date insurance policy. Businesses need to make sure their DR system works and that they can recover within minutes.
In the event a disaster does strike, the first few minutes are critical, and businesses need to recover as quickly as possible. With cloud-based BC/DR, organisations can revert to a point just minutes before the outage, ensuring they have up-to-date resources and files. Relying on incomplete backups taken 12–24 hours earlier could take hours to restore, leaving organisations with downtime they cannot afford.
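The gap between those two approaches is essentially a recovery point objective (RPO) calculation: the longer the interval between recovery points, the more data is at risk. A rough sketch, using a hypothetical transaction rate purely for illustration:

```python
# Contrast worst-case data loss under a nightly backup vs continuous,
# journal-based replication. The transaction rate and RPO figures are
# illustrative assumptions, not measurements from any product.

def records_at_risk(transactions_per_minute: float, rpo_minutes: float) -> int:
    """Transactions lost if failure strikes just before the next recovery point."""
    return int(transactions_per_minute * rpo_minutes)

tpm = 120  # hypothetical: 120 transactions per minute

# Nightly backup: worst-case RPO is the full 24-hour interval
print("Nightly backup (RPO ~24 h):",
      records_at_risk(tpm, 24 * 60), "transactions at risk")

# Continuous replication: worst-case RPO on the order of seconds
print("Continuous replication (RPO ~30 s):",
      records_at_risk(tpm, 0.5), "transactions at risk")
```

The arithmetic is simple, but it makes the trade-off concrete: shrinking the recovery point from a day to seconds changes the exposure from six figures' worth of transactions to a handful.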
By Matthew Kates, ANZ country manager, Zerto