Data centre maintenance keeping you up? HPE says automation's the answer
Always-on uptime is essential to business success, and ensuring uninterrupted service requires constant vigilance and maintenance. That burden only looks set to grow as organisations deploy more business-critical applications.
While new infrastructure management tools are introduced continuously, many still fall short of the enhanced automation and lowered maintenance requirements that the industry covets. As a result, many IT professionals are still losing days and nights – possibly even missing important birthdays and anniversaries – dealing with issues that require manual tuning.
A major pain point that continually surfaces in conversations with customers is that maintenance cycles still require human intervention. Maintenance is also a large drain on operating budgets, with data center operators devoting a huge proportion of their spend simply to keeping the lights on.
This raises the question: why is maintenance still keeping operators up at night despite the constant introduction of new tools to deal with the problem? What are we really missing?
The shortfalls of traditional infrastructure tools
Truly removing the burden of managing infrastructure requires the foresight to predict problems before they occur, along with deep insight into underlying workloads and resources for better infrastructure optimisation.
Lose no more sleep over data center maintenance. Consider these four factors to determine whether your tools are falling short in overcoming frustrating maintenance problems:
1) They don't learn from others
Analytics that simply report on local system metrics tend to offer limited value. Instead, look for a tool's ability to learn from the behaviour of thousands of peer systems to aid in the detection and diagnosis of developing issues. If two minds are better than one, a thousand are better still.
A holistic approach to data collection and analysis can pool observations from an immense variety of workloads. This allows rare events identified at one site to be pre-emptively avoided at another, and more common events to be detected faster and with greater accuracy.
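To make the peer-learning idea concrete, here is a minimal Python sketch of comparing one system's telemetry against a baseline pooled from many peers. The metric, sample values and threshold are purely illustrative assumptions, not any vendor's actual implementation.

```python
# Minimal sketch (illustrative only): judge one system against a fleet-wide baseline.
from statistics import mean, stdev

def build_fleet_baseline(peer_samples):
    """Pool latency samples from many peer systems into a mean/std-dev baseline."""
    pooled = [value for samples in peer_samples for value in samples]
    return mean(pooled), stdev(pooled)

def is_anomalous(local_samples, baseline, z_threshold=3.0):
    """Flag the local system when its recent average drifts far from the fleet norm."""
    fleet_mean, fleet_std = baseline
    z_score = (mean(local_samples) - fleet_mean) / fleet_std
    return abs(z_score) > z_threshold

# Hypothetical read-latency samples (ms) from three peer systems and one local system.
fleet = [[1.1, 1.3, 0.9], [1.0, 1.2, 1.4], [0.8, 1.1, 1.0]]
print(is_anomalous([2.9, 3.1, 3.4], build_fleet_baseline(fleet)))  # True: worth a proactive look
```

The point is simply that a fleet-wide baseline lets a single site be judged against what thousands of comparable systems consider normal, rather than against its own history alone.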
2) Failing to see the whole picture
Traditional tools often provide analytics in a siloed fashion, reporting system status per device, which is just one part of the overall story. Because problems that disrupt applications can pop up anywhere in the infrastructure stack, it is important to be able to conduct cross-stack analytics across multiple layers to see the bigger picture, spanning crucial components such as applications, compute, virtualisation, databases, networks and storage.
3) They don't know enough
Predictive modelling requires deep domain experience – understanding all the operating, environmental, and telemetry parameters within each system in the infrastructure stack. General-purpose analytics can only go so deep. However, pairing domain experts with AI can enable machine-learning algorithms to identify causation from historical events, and in turn, predict the most complex and damaging problems.
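As a rough illustration of pairing labelled historical events with machine learning, here is a hypothetical sketch using scikit-learn. The feature names, training data and alert threshold are invented for the example and do not reflect any specific product's models.

```python
# Minimal sketch (illustrative only): learn from historical telemetry labelled with
# known failure events, then score new telemetry for failure risk.
from sklearn.ensemble import RandomForestClassifier

# Each row: [cpu_util, queue_depth, cache_hit_ratio, drive_temp_c]
historical_telemetry = [
    [0.45, 4, 0.92, 38],   # healthy
    [0.50, 6, 0.90, 40],   # healthy
    [0.93, 48, 0.41, 61],  # preceded a failure
    [0.88, 52, 0.38, 59],  # preceded a failure
]
failure_followed = [0, 0, 1, 1]  # labels supplied by domain experts and support cases

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(historical_telemetry, failure_followed)

# Score a new sample and raise an alert well before the problem becomes damaging.
risk = model.predict_proba([[0.90, 45, 0.44, 60]])[0][1]
if risk > 0.7:
    print(f"Predicted failure risk {risk:.0%}: open a proactive case")
```

The domain expertise lives in choosing which parameters matter and in labelling which historical events actually preceded failures; the algorithm then generalises those patterns to new systems.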
4) They can't act without you
Perhaps the biggest drawback of traditional tools is their inability to act. In the ideal state of autonomous operations, the data center would be self-managing, self-healing and self-optimising. In essence, it should be able to avoid a problem or improve the environment without intervention from an administrator. Achieving this level of automation requires a proven history of automated recommendations that builds the necessary trust and confidence.
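One way to picture that trust gate is a simple rule that only auto-applies recommendations with a long, successful track record and routes everything else to a person. The sketch below is hypothetical; the action names, history counts and thresholds are chosen purely for the example.

```python
# Minimal sketch (illustrative only): gate automated remediation on a proven track record.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    times_applied: int
    times_succeeded: int

def handle(rec, trust_threshold=0.95, min_history=50):
    """Auto-apply only recommendations with a long, successful history; otherwise escalate."""
    if rec.times_applied >= min_history:
        success_rate = rec.times_succeeded / rec.times_applied
        if success_rate >= trust_threshold:
            return f"auto-applied: {rec.action}"
    return f"queued for administrator review: {rec.action}"

print(handle(Recommendation("rebalance volumes across controllers", 120, 118)))  # auto-applied
print(handle(Recommendation("upgrade firmware on array", 8, 8)))                 # human review
```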
The future of data center maintenance
To overcome the limitations of traditional tools, convincingly reduce maintenance requirements and better automate the data center, organisations will have to embrace a new generation of AI solutions. This means leveraging tools that are able to observe, learn, predict, recommend and, ultimately, automate.
Through observation, AI can develop a steady-state understanding of the ideal operating environment for various workloads and applications. Deep system telemetry coupled with global connectivity allows for rapid cloud-enabled machine learning, so AI tools can quickly predict problems through pattern-matching algorithms. Application performance can even be modelled and tuned for new infrastructure based on historical configurations and workload patterns.
Based on these predictive analytics, AI solutions can determine the appropriate responses required to improve the data center environment. The pressure is then taken off IT teams – they no longer have to work through the night to find the source of a problem. More importantly, once the AI has proven effective, recommendations can be applied automatically without the intervention of IT administrators. That, to me, is the holy grail of automation.
At HPE, we have seen customers using AI tools predict and resolve issues automatically 86 per cent of the time. They also spend 85 per cent less time on storage issues and enjoy a 79 per cent reduction in IT storage operating expenditure. The advantages of deploying AI to assist in managing data center infrastructure are undeniable.
Furthermore, with technological advancements set to invigorate all sectors of the Asia Pacific economy, the highly diverse region is expected to face a shortage of two million IT professionals by 2030 (Korn Ferry, The Global Talent Crunch). I'm certainly looking forward to the not-so-distant future in which automation is the next frontier in data center management – and, of course, to getting a good night's rest.