Exclusive: Diligent's Simon Berglund on how AI is changing the GRC landscape
In the rapidly evolving world of Governance, Risk, and Compliance (GRC), artificial intelligence (AI) is becoming a key player.
Simon Berglund, Senior Vice President and General Manager for APAC at GRC technology provider Diligent, sat down to discuss how AI is changing the landscape, as well as the opportunities and risks it brings.
"Diligent is the world's leading GRC technology company, as recently verified by Forrester Research," Berglund explained. He emphasised that the company enables its customers to drive strategic success by ensuring compliance and collaboration at the highest levels. AI, he believes, is central to this goal, empowering leaders to make better decisions.
Berglund noted the foundational role of AI in enabling boards and senior executives to focus on what really matters.
"Board directors and senior leaders are responsible for two things: compliance with legislation and internal policies, and strategic success. Deploying AI in a smart, contextually relevant way empowers those leaders to focus on delivering an optimised strategy."
For Berglund, AI's potential to enhance board performance and risk management is clear: leaders are always seeking ways to simplify their roles and make more insightful decisions.
"AI provides the power to analyse complex information, summarise it, make recommendations, and provide insights into emerging risks. It allows leaders to deliver their obligations more effectively."
While AI holds great promise, Berglund acknowledged that there are barriers to adoption in the GRC space. Trust, he explained, is one of the primary concerns.
"Most people have become accustomed to public AI technologies where content is used without permission and shared publicly. At Diligent, we require our customers to opt into the use of AI, and we do not use any customer's content for training models used by other customers."
Berglund further explained that explainability is crucial when integrating AI into GRC functions. He stressed that organisations must trust that AI-generated content is secure and understand how it works. "Users must be able to understand and explain the AI's decisions, both in real time and retrospectively, when decisions are revisited."
When it comes to implementing AI, Berglund had clear advice for organisations. He urged companies to seek comprehensive demonstrations of AI in a production-level environment, rather than relying on polished but impractical presentations. "Users need to see its actual use before they can make an informed decision as to whether it's going to be usable for them," he said.
This hands-on approach extends beyond just seeing results; the experience of generating those results is equally important.
"Is it smooth, seamless, understandable, and intuitive? The last thing you want is for users to push back on something you've invested in," Berglund added.
Data governance, he noted, is another critical factor when adopting AI in GRC.
"All organisations should include AI in their acceptable use policies, because almost everyone is using some form of AI to do their job," he said.
"Mature organisations should also establish risk management controls and monitor AI's use, just as they would any other person or cognitive entity."
Berglund also discussed the ethical concerns that come with AI, particularly the issue of bias. "Bias is present everywhere, even in people, and it's also present in machines. The deployment of AI should be rigorously documented, tracked, and controlled as part of a holistic risk, audit, and compliance framework."
Selecting the right AI tool for GRC is no simple task. Berglund advised organisations to look for technology partners whose DNA includes GRC, ensuring that both the opportunities and risks of AI are fully understood. "AI should be a part of the GRC stack, not separate from it. This ensures business continuity, ethics, and compliance."
A comprehensive AI implementation plan is essential for success, according to Berglund. He explained that if the AI tool is part of the GRC technology being used, implementation can be as simple as "turning on a switch" and training users.
However, using an external AI tool introduces additional risks that must be managed carefully.
Reflecting on the risks associated with AI, Berglund noted that executives must remain vigilant in verifying AI-generated outputs. "The risk is that busy executives may become less attentive in verifying the accuracy and fitness for purpose of AI outputs. Over time, they may come to assume that the AI is generating accurate information."
Diligent, however, has taken steps to mitigate these risks. "Any AI-generated output on the Diligent platform is labelled as such. It hyperlinks to the original detailed information, allowing users to verify that the AI's recommendations are consistent and accurate."
Berglund believes that transparency and accountability are key to successful AI deployment. By clearly labelling AI-generated content and linking it to the original source material, Diligent provides users with the tools they need to ensure AI's output is trustworthy and explainable.
As AI continues to revolutionise GRC, organisations will need to carefully consider their approach to adoption.
For Berglund, the key is trust: trust in the technology, trust in the vendor, and trust in the process.
"The trust we establish with our customers is paramount," Berglund concluded. "AI's immense power brings great risks, but by ensuring that it is used ethically and transparently, we can unlock its full potential for governance, risk, and compliance."