
Australian boards pressured to rush AI despite risks

Thu, 26th Mar 2026

TrendAI has released research on AI adoption and governance in Australian organisations, pointing to growing pressure on boards and executives to move ahead despite security and compliance concerns.

The study surveyed 3,700 business and IT decision-makers across 23 countries, including Australia, and found a gap between the pace of AI deployment and the controls around it. In Australia, 66% of business decision-makers said they had felt pressure to approve AI initiatives that posed potential security or compliance risks.

That tension sits alongside relatively strong confidence in readiness. Eighty per cent of Australian business decision-makers said they felt prepared for AI adoption, yet 44% lacked confidence in their understanding of legal and governance frameworks.

Training also remains limited. Only 26% of business decision-makers in Australia said they had completed formal, mandatory AI training, while most organisations were still only at a partial implementation stage.

Governance gap

The research suggests AI adoption in Australia is moving faster than many organisations can manage safely. Some 68% of respondents said AI was advancing more quickly than they could secure it.

Although 64% of Australian organisations reported having comprehensive AI policies in place, more than 40% still cited unclear regulation or compliance standards, along with weak internal policy and governance, as barriers to safe adoption. The data suggests many companies are rolling out AI before internal rules are fully settled.

The global results showed a similar pattern, with 67% of respondents across the full survey group saying they had been pressured to approve AI despite security concerns.

In Australia, nearly one in five respondents described those concerns as extreme but said they were overridden to keep up with competitors and internal demand. That can leave security teams reacting to decisions made elsewhere in the business, increasing the risk of workarounds and unsanctioned AI tools.

Rachel Jin, Chief Platform & Business Officer and Head of TrendAI, outlined the company's view of the issue. "Organisations are not lacking awareness of risk, they're lacking the conditions to manage it. When deployment is driven by competitive pressure rather than governance maturity, you create a situation where AI is embedded into critical systems without the controls needed to manage it safely. This research reinforces our focus on helping organisations drive solid business outcomes with AI while still managing business risk," Jin said.

Leadership divide

The findings also point to a divide between business leaders and technical teams on accountability and oversight. In Australia, less than half of business decision-makers, 42%, said a human should always remain in the loop in AI-driven security operations.

Support for emergency controls was also mixed. Some 54% of Australian respondents backed the introduction of AI kill-switch mechanisms to shut down systems in the event of failure or misuse, while nearly half remained unsure.

TrendAI said that uncertainty reflects a broader problem as organisations move towards systems with greater autonomy. Confidence in agentic AI remains limited, with only 44% of global respondents believing it would significantly improve cyber defence in the short term.

Australian respondents identified data access as the leading concern. Almost half, 45%, said AI agents accessing sensitive data was the biggest risk, while 34% pointed to autonomous code deployment. Nearly one in three cited abuse of trusted AI status, and 30% pointed to hallucinations or false outputs.

Observability remains another concern. Globally, 31% of business decision-makers said they lacked observability or auditability over these systems, raising questions about how organisations can monitor and intervene once they are in place.

Threat landscape

TrendAI linked those internal governance concerns to a changing cyber threat environment, citing its own threat research showing attackers are already using AI to automate reconnaissance, accelerate phishing campaigns and lower the barrier to entry for cybercrime.

Andrew Philp, Field CISO for ANZ at TrendAI, was made available to discuss the local results, including the rise of agentic and autonomous AI workloads and a reported 35% increase in publicly disclosed AI vulnerabilities over the past year.

Srujan Talakokkula, Managing Director ANZ at TrendAI, said the Australian and New Zealand results showed both optimism and uncertainty. "While many organisations across Australia and New Zealand report strong confidence in AI preparedness and strong recognition of AI's role in combating AI-driven threats, there is a clear gap in understanding of legal frameworks governing AI and differing views on accountability and human oversight across both business and IT leadership.

"With governance challenges intensifying and AI-driven threats becoming more sophisticated, visibility of assets and risk management across the entire AI lifecycle is critical. This research highlights the importance of working with trusted partners that allow organisations to safely deploy and scale AI," Talakokkula said.

Jin said the move towards more autonomous systems is creating a new set of exposures for companies that have not yet agreed on how much control humans should retain. "Agentic AI is moving organisations into a new risk category. Our research shows the concerns are already clear, from sensitive data exposure to loss of oversight. Without visibility and control, organisations are deploying systems they don't fully understand or govern, and that risk is only going to increase unless action is taken," Jin said.