IT Brief New Zealand - Technology news for CIOs & IT decision-makers

Ping Identity warns of AI agent authorisation risks

Wed, 29th Apr 2026

Ping Identity has published commissioned research on authorisation risks linked to AI agents in enterprise systems. The report argues that existing identity models are struggling to govern autonomous software operating at runtime.

Produced by KuppingerCole Analysts, the research says companies are deploying AI agents into live environments faster than they are putting suitable controls in place. It identifies a failure mode in which agents combine individually valid permissions in ways that produce actions bypassing established controls.
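This "toxic combination" failure mode can be illustrated with a minimal sketch (all agent and permission names here are hypothetical, not taken from the report): each grant is valid on its own, but holding certain grants together creates an uncontrolled path, such as pairing access to sensitive records with an outbound channel.

```python
# Hypothetical permission grants for an AI agent. Each grant, taken
# alone, passes a standard access review.
AGENT_PERMISSIONS = {
    "support-agent": {"read_customer_records", "send_external_email"},
}

# Combinations that policy forbids holding simultaneously, e.g.
# sensitive-data read access plus an outbound channel to leak it.
TOXIC_COMBINATIONS = [
    {"read_customer_records", "send_external_email"},
]

def toxic_grants(agent: str) -> list:
    """Return every forbidden combination fully contained in the agent's grants."""
    held = AGENT_PERMISSIONS.get(agent, set())
    return [combo for combo in TOXIC_COMBINATIONS if combo <= held]

# The agent passes per-permission checks but fails the combined check.
print(toxic_grants("support-agent"))  # one combination is flagged
```

The point of the sketch is that the check operates on sets of grants rather than on individual access decisions, which is the shift the report describes.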

This shifts attention from the basic question of who or what has access to systems to how that access is used once an agent is active. Traditional identity and access management tools, the report says, were built around human users and discrete access decisions, not software entities acting continuously across applications, data and workflows.

The gap emerges when AI agents operate beyond the limits of static access controls. In that setting, the concern is not simply whether permission has been granted, but whether an organisation can enforce policy, maintain oversight and trace decisions as they happen.

Andre Durand, Chief Executive Officer & Founder, Ping Identity, said the pace of AI deployment is outstripping governance.

"Enterprises are deploying autonomous AI faster than they can govern it," Durand said.

"Identity remains foundational, but in an agentic environment it must operate continuously. Control must be enforced at the moment an action occurs."

Where risks arise

The report points to several weak spots in current identity frameworks. One is delegation opacity, where AI agents can trigger sub-agents or pass tasks along chains that become difficult to audit. Another is that common identity standards such as OAuth and OpenID Connect rely on assumptions about human decision-making that may not hold when autonomous agents interact directly with other systems.
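The delegation-opacity problem described above can be sketched in a few lines (the identities and structure here are illustrative assumptions, not the report's design): if every hand-off extends a recorded chain of principals instead of replacing the acting identity, an auditor can reconstruct who acted on whose behalf at any depth.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DelegationToken:
    """A task hand-off that carries its full chain of upstream principals."""
    principal: str        # the agent (or human) acting now
    chain: tuple = ()     # every upstream delegator, oldest first

    def delegate_to(self, sub_agent: str) -> "DelegationToken":
        # Extend the chain rather than overwriting the acting identity,
        # so no hop in the delegation path is lost to auditors.
        return DelegationToken(sub_agent, self.chain + (self.principal,))

# A human's agent spawns two levels of sub-agents.
root = DelegationToken("alice@corp.example")
agent = root.delegate_to("expense-agent")
sub = agent.delegate_to("receipt-ocr-agent")

# The audit trail survives every hop.
print(sub.principal)  # receipt-ocr-agent
print(sub.chain)      # ('alice@corp.example', 'expense-agent')
```

Without something like the `chain` field, each hop would present only its own identity, which is exactly the auditing gap the report calls delegation opacity.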

It also highlights the risk of context leaking across systems if authorisation decisions are not continuously re-evaluated. That raises new questions about permission inheritance, accountability and liability when one agent acts through another or on behalf of a human user.

According to the research, these pressures are already becoming visible in real-world security incidents. It cites IBM's 2025 Cost of a Data Breach report, which found that 13% of organisations had experienced AI-related security breaches, and that of those, 97% lacked adequate access controls for AI systems.

Those figures are paired with examples such as enterprise data leaks and prompt injection attacks, which the report says show how gaps in AI governance can be exploited. The broader concern is that many organisations still treat AI as an extension of user access management rather than as a distinct operational identity problem.

Governance model

To address this, the analysts set out a governance framework built around identity, policy-based authorisation, oversight and accountability. The model extends zero trust principles to continuous authorisation, aiming to check not only identity at login or token issuance but also intent, context and policy at the point of action.
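As a rough illustration of continuous authorisation (a sketch under assumed policy names, not Ping Identity's or KuppingerCole's implementation): instead of trusting a token issued at login, every action is evaluated against policy with the live context at the moment it executes, so the same agent can be allowed one call and denied the next.

```python
# Hypothetical policy: which actions an agent may take, under which
# runtime conditions. Rules see the context of the specific action.
POLICY = {
    "invoice-agent": {
        "read_invoice": lambda ctx: True,
        # Payments allowed only below a limit and during business hours.
        "pay_invoice": lambda ctx: ctx["amount"] <= 10_000
                                   and 8 <= ctx["hour"] < 18,
    }
}

def authorise(agent: str, action: str, ctx: dict) -> bool:
    """Evaluate policy at the point of action, not at token issuance."""
    rule = POLICY.get(agent, {}).get(action)
    return bool(rule and rule(ctx))

# Same agent, same credentials: the decision depends on live context.
print(authorise("invoice-agent", "pay_invoice", {"amount": 500, "hour": 10}))     # True
print(authorise("invoice-agent", "pay_invoice", {"amount": 50_000, "hour": 10}))  # False
print(authorise("invoice-agent", "delete_ledger", {}))                            # False
```

The design choice this mirrors is the report's extension of zero trust: the grant itself is no longer the control, because each action is re-decided with current intent and context.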

The report argues that this kind of runtime control will be needed as AI systems move from experimentation to operational use. In practice, that means organisations may need tools that can assign distinct identities to agents, apply rules dynamically and preserve clear human accountability for automated actions.

Martin Kuppinger, Founder, KuppingerCole Analysts, linked the issue to a broader change in identity requirements.

"These trends reflect a broader shift in identity requirements," Kuppinger said.

"As autonomous agents become more prevalent, organisations will need to extend identity and authorisation models to maintain control, accountability, and trust across increasingly dynamic environments."

Ping Identity said its Identity for AI products are intended to align with those principles, including runtime identity, policy-based authorisation and governance controls for AI agents. It also said KuppingerCole had recognised its approach to assigning unique identities to agents, applying policy controls and maintaining human accountability in AI-driven processes.

The research is framed as part of a wider industry adjustment as businesses move AI systems from pilots into production. Its central argument is that access alone is no longer enough when software agents can act autonomously across enterprise systems, and that control must persist while those actions are being carried out.