Okta warns Australian firms on AI agent security gap
Mon, 4th May 2026
Okta has released its Businesses at Work 2026 report in Australia and made Okta for AI Agents generally available.
The findings suggest Australian organisations are adopting AI agents faster than they are securing them.
The report points to a sharp rise in non-human identities, including AI agents, bots and service accounts, inside corporate systems. In some environments, these identities now outnumber human users by as much as 45 to 1.
That growth is not being matched by security readiness. Only 10% of organisations said their identity systems are fully equipped to manage and secure non-human identities, while 41% said no single team is responsible for AI security.
The figures suggest many businesses are moving AI tools into day-to-day operations before establishing who is responsible for the associated risk. They also show identity and access management becoming a central issue as AI systems gain access to data, applications and workflows.
Control gap
Rapid AI agent adoption is widening the gap between deployment and oversight. Unlike traditional software tools, AI agents can interact with multiple applications, use sensitive data and take actions across systems without ongoing human supervision.
This shift is creating a new category of identity risk, as many agents are not tracked or governed in the same way as employees or contractors. The problem is compounded by shadow AI, where tools or agents are used without formal approval or visibility.
Industry research cited in the report suggests the problem is already surfacing in security incidents. About 88% of organisations reported confirmed or suspected security incidents involving AI agents, yet only 22% said those agents had identities tied to them.
Mike Reddie, Vice President and General Manager ANZ at Okta, said organisations now face a control problem rather than a simple adoption challenge.
"As organisations move from experimenting with AI to embedding it into everyday operations, the challenge is no longer adoption, it's control," Reddie said.
"AI agents are effectively becoming a new workforce. Without visibility and control over those identities, organisations risk introducing new security gaps at scale."
Reddie said AI security is tied to identity systems rather than representing a wholly new class of threat.
"AI doesn't introduce a new security problem, it amplifies an existing one: identity. If organisations want to scale AI safely, they need to start with visibility, access controls and governance. That's what enables control at scale."
New product
Alongside the report, Okta for AI Agents is now generally available. The product is designed to help organisations manage AI agents as identities across cloud platforms, software applications and AI frameworks.
Okta described the offering as vendor-neutral and said it extends identity security controls beyond human users. It is intended to help businesses discover AI agents, bring them under management and apply rules governing access and activity.
The product is built around three questions: where AI agents are, what they can connect to and what they can do. In practice, that means identifying agents across cloud, SaaS and custom environments, setting scoped and short-lived credentials, and governing agent actions through approvals, audit trails and revocation controls.
It also aims to address unmanaged deployments by detecting shadow AI and allowing organisations to deactivate agents or withdraw access when needed. That becomes more significant as companies introduce autonomous or semi-autonomous software into internal workflows, customer service and operational processes.
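As a rough illustration of the controls described above, the sketch below shows how scoped, short-lived credentials, shadow-agent detection and revocation might fit together in practice. It is a minimal example only; names such as AgentIdentity, issue_credential and the agent IDs are hypothetical and do not represent Okta's product API.

    # Illustrative sketch only: a minimal in-memory model of scoped, short-lived
    # credentials for AI agent identities, plus shadow-agent detection and
    # revocation. All names here are hypothetical, not part of any Okta API.
    import secrets
    import time
    from dataclasses import dataclass

    @dataclass
    class AgentCredential:
        token: str
        scopes: frozenset     # narrowly scoped permissions, e.g. {"crm:read"}
        expires_at: float     # short-lived: valid only until this timestamp

    def issue_credential(scopes: set, ttl_seconds: int = 900) -> AgentCredential:
        """Mint a short-lived, scoped credential (15-minute default)."""
        return AgentCredential(
            token=secrets.token_urlsafe(32),
            scopes=frozenset(scopes),
            expires_at=time.time() + ttl_seconds,
        )

    def is_allowed(cred: AgentCredential, required_scope: str) -> bool:
        """Permit an action only if the credential is unexpired and in scope."""
        return time.time() < cred.expires_at and required_scope in cred.scopes

    # Registry of agents that have been formally brought under management.
    managed = {"support-summariser-01": issue_credential({"crm:read"})}

    # Identities observed calling internal systems, e.g. from gateway logs.
    observed = {"support-summariser-01", "sales-email-bot-07"}

    # Shadow AI: observed agents with no managed identity.
    shadow = observed - managed.keys()
    print("Unmanaged agents:", shadow)                      # {'sales-email-bot-07'}

    # Scoped access: reading CRM data is allowed, exporting it is not.
    cred = managed["support-summariser-01"]
    print(is_allowed(cred, "crm:read"), is_allowed(cred, "crm:export"))  # True False

    # Revocation: withdrawing the identity removes the agent's access entirely.
    managed.pop("support-summariser-01")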
Security focus
The findings add to the broader debate over whether businesses have put enough basic security controls in place before rolling out generative AI and agent-based systems. As organisations increase the number of non-human identities in their systems, the challenge extends beyond authentication to governance, accountability and lifecycle management.
For Australian organisations, the report suggests many are still at an early stage in securing these identities even as AI use expands. The result is a mismatch between the scale of adoption and the maturity of oversight, particularly where no single security or technology team has clear responsibility.
Okta argues that identity systems should become the main control layer for both human and non-human users. In that model, AI agents are treated as first-class identities, subject to the same visibility, access restrictions and governance standards as other actors within an organisation.
Those controls, Okta said, can help reduce excessive access, improve visibility into system activity and manage the full lifecycle of AI agents, including revoking access and intervening in real time.
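As a simple illustration of that lifecycle model, the sketch below shows an agent action gated by an approval list and recorded in an audit trail so it can be reviewed or revoked later. The function and agent names are hypothetical and are not drawn from Okta's product.

    # Illustrative sketch only: gating an agent action behind an approval check
    # and recording every attempt in an audit trail. Names are hypothetical and
    # do not represent Okta's product API.
    from datetime import datetime, timezone

    audit_log = []
    approved_actions = {("support-summariser-01", "crm:read")}   # approvals granted so far

    def perform_agent_action(agent_id: str, action: str) -> bool:
        """Allow the action only if it has been approved; log every attempt."""
        allowed = (agent_id, action) in approved_actions
        audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "agent": agent_id,
            "action": action,
            "allowed": allowed,
        })
        return allowed

    print(perform_agent_action("support-summariser-01", "crm:read"))    # True
    print(perform_agent_action("support-summariser-01", "crm:export"))  # False, still audited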