Why organisations need a version of 'HR' for AI agents
The HR discipline is integral to people management in organisations, but as non-humans take on more of the workload, who, or what, is their 'HR' equivalent internally?
Human resources is a core part of how organisations manage their people. We don't think about humans in an organisation without reference to the range of good practices deployed to manage them.
When a new person is brought into an organisation, it is not on an unmanaged basis. For all aspects of the way they participate in the organisation, the starting point and their ongoing point of contact is HR.
They are formally recruited, vetted, onboarded, and instructed in the accepted ways and methods of working. Their use of systems and access to data are governed and limited according to their role. Everything about them is tied back to an employee identity.
But a major change is occurring in how work is performed, and who it is performed by, in organisations across the spectrum of vertical industries. More and more organisations are turning to AI agents to perform tasks, either semi- or fully autonomously. In some organisations, there are ambitions to have large 'workforces' of AI agents.
Based on the current trajectory, AI agents will significantly outnumber their human counterparts within a few short years. Organisations embracing agentic AI are starting to track their human-agent ratio, or HAR, as a key metric. In some organisations, parity - a 1:1 ratio of humans to agents - is not far off; McKinsey, for example, had 25,000 agents to its 40,000 employees as of late January.
But while the number of AI agents is rapidly increasing, across the vast majority of organisations we talk to we are not yet seeing any structure around how these vast agent workforces will be used, controlled, or managed.
Put another way, there is no question that organisations everywhere are trying to build AI agents. The live question is how these AI agent 'workforces' will be governed.
ISPM is the HR for agentic workers
Leading organisations are looking at what they do now to manage their people and trying to replicate some of this structure for AI agents. This means building and wrapping HR-like governance, systems and best practices around an agentic workforce.
Some key ingredients already exist. Notably, most organisations likely already have a team that is, on paper, well-suited to act as the 'HR' function for an agentic workforce. If an organisation has a dedicated identity team or function within its broader cybersecurity team, that team is likely to be the ideal overseer. But feasibly, other teams could also carry out the role.
Perhaps more importantly, ISPM - identity security posture management - practices and tooling can be used to build and wrap an 'HR'-like governance structure around agentic workforces. This is worth unpacking in more detail.
The two key areas where organisations with growing agent workforces need support are visibility and context. Without these, organisations have no foundational basis for understanding where their agent-related risk lies and what actions they can or should take.
Visibility + Context = Insight for Action
It goes without saying that organisations need visibility over all identities - human and AI agents - that access their environment and system resources. This cannot be a 'one-and-done' process - it has to be performed continuously to maintain a complete and up-to-date record, particularly as the number of agents rapidly increases as more use cases are found.
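As an illustration only, with entirely hypothetical identity records, this kind of continuous visibility can be pictured as a recurring reconciliation: take a snapshot of the identities actually active in the environment, diff it against the inventory of record, and flag anything that appeared without onboarding or vanished without offboarding:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    id: str
    kind: str  # "human" or "agent"

def reconcile(known: set[Identity], observed: set[Identity]):
    """Diff the current environment snapshot against the inventory of record."""
    unregistered = observed - known  # active, but never formally onboarded
    dormant = known - observed       # onboarded, but no longer seen in use
    return unregistered, dormant

# Hypothetical inventory and snapshot
inventory = {Identity("alice", "human"), Identity("invoice-bot", "agent")}
snapshot = {Identity("alice", "human"), Identity("invoice-bot", "agent"),
            Identity("triage-agent", "agent")}  # appeared without onboarding

new, missing = reconcile(inventory, snapshot)
print(new)  # the unregistered triage-agent surfaces here
```

Run on a schedule rather than once, this is what keeps the record complete and up to date as new agents appear.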
Once visibility is established, organisations can move to the next stage and start to build context around what those identity types are being used for. Part of the future is being able to have insight into what AI agents are doing, what systems and data sources they are accessing, and whether or not this remains within their defined permissions or the organisation's risk tolerances.
Any time data in an ecosystem is touched or accessed, organisations should care and seek to understand what happened: Did access follow a set of common principles or guidelines? Is the context of the access understood? Are there any additional accountabilities required from a compliance perspective?
This is especially important in the context of AI agents or assistants, particularly as they act in an increasingly autonomous fashion and interact with one another through agent-to-agent communications and/or in multi-agent processes.
Organisations need to understand how the privilege that governs access to systems and data is assigned to these agents. For example, AI agents that act as assistants may leverage an employee's identity and permissions to do things on their behalf. As identity or security experts, our job is to ask the hard questions about whether this is a good model. Should an AI agent inherit the permissions of the employee it is assisting, or get its own set of permissions tailored to the specific context of the task it is meant to perform?
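A minimal sketch, with hypothetical permission names and a hypothetical task-to-scope mapping, of the difference between the two models: an assistant that inherits the employee's full permission set versus an agent granted only the scopes its task requires:

```python
# The employee's full permission set (hypothetical names)
employee_perms = {"crm:read", "crm:write", "payroll:read", "email:send"}

# Model 1: the assistant inherits everything the employee can do
inherited = set(employee_perms)

# Model 2: the agent gets its own identity, scoped to the task at hand
def scope_for_task(task: str) -> set[str]:
    task_scopes = {  # hypothetical mapping of tasks to minimal scopes
        "summarise-pipeline": {"crm:read"},
        "draft-replies": {"crm:read", "email:send"},
    }
    return task_scopes.get(task, set())

scoped = scope_for_task("summarise-pipeline")

# Standing access the inherited model grants that the task never needed
excess = inherited - scoped
print(sorted(excess))
```

The `excess` set is the concrete cost of the inheritance model: every permission in it is standing privilege an autonomous agent holds without any task-level justification.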
With visibility and context comes insight and understanding, which organisations can use to structure their agent workforces more effectively and de-risk the expansion of this non-human workforce. ISPM helps organisations to qualify and quantify how agents are behaving and whether that activity aligns with internal expectations. If not, organisations have the insight and tooling to take definitive action, ensuring they stay in control of their agentic workforces.