IT Brief New Zealand - Technology news for CIOs & IT decision-makers
Cloud fortress metal shields with gated data streams ai security

SentinelOne debuts lifecycle platform for AI security

Tue, 10th Feb 2026

SentinelOne has expanded its AI Security Platform with data security posture management (DSPM) tools designed to address risks from the moment data enters an AI pipeline through to runtime operations.

It framed the move as a response to enterprise AI adoption as models and agents shift from testing into production workflows. That transition is increasing the volume of corporate data flowing into AI systems and expanding the range of cloud services and workloads involved in training and inference.

Data controls

The new DSPM capabilities sit at the start of the AI lifecycle, focusing on cloud object stores and databases. The goal is to identify and control sensitive or high-risk data before it is used for training or inference.

Security teams are particularly concerned about the exposure of personal data, credentials, and proprietary information. Once this data enters a training pipeline, it can be difficult to remove from a model later. SentinelOne said DSPM is intended to reduce the likelihood of data being retained or reproduced in ways that create compliance and privacy issues.

It also highlighted risks from the manipulation of data pipelines, including pipeline poisoning and the use of ingestion paths to introduce malicious content. The DSPM approach includes scanning cloud storage for malicious content during ingestion.

Platform scope

DSPM is part of a broader platform bundle that also includes cloud security posture management (CSPM), AI security posture management (AI-SPM), runtime workload protection, controls focused on employee use of generative AI, and agent security.

CSPM typically addresses policy drift, misconfigurations, and excessive permissions across cloud environments. AI-SPM extends those checks into AI-specific environments such as training jobs, development notebooks, managed AI services, and inference endpoints.

SentinelOne said the combined capabilities help security teams map connections across data, infrastructure, and AI systems. The intent is to identify attack paths that start with exposed storage or misconfigured services and lead into model logic or downstream applications.

Runtime risks

Runtime protection is positioned as another key stage in the lifecycle. These controls focus on AI systems operating in production, where they handle live user interactions, business data, and operational processes.

In production, AI models and agents run inside cloud workloads, which can be targets for exploitation and unauthorised access. SentinelOne said its runtime protections address risks such as lateral movement within cloud environments and compromise of workloads hosting AI systems.

Cloud workload protection platforms monitor compute environments and enforce protections during execution. SentinelOne described its runtime approach as a continuation of the same security model rather than a separate control layer.

Prompt Security

The expansion also adds model red teaming, validation and guardrails, and MCP Security via Prompt Security. Prompt-based attacks have become a central concern for organisations deploying generative AI, since prompts and model inputs can be used to manipulate system behaviour or extract data.

Red teaming and validation test models with adversarial prompts and simulated attacks, aiming to identify weaknesses in model behaviour and policy bypass paths before systems are widely deployed in production.

Regulation pressure

SentinelOne tied its platform approach to regulatory exposure as AI systems become embedded in enterprise workflows. It also pointed to compliance tracking over time against global AI regulations, including the EU AI Act, within its cloud security offering.

Many organisations face parallel requirements: meeting privacy obligations for personal data, implementing access controls and auditability, and understanding how models use data and whether outputs might expose sensitive information.

SentinelOne said security teams need a lifecycle view because weaknesses at one stage can create downstream impacts that are difficult to remediate. It also argued that AI adoption increases the scale and speed at which risks can spread across data stores, cloud infrastructure, and runtime environments.

Gregor Stewart, SentinelOne's Chief AI Officer, said the company views AI security as an end-to-end discipline rather than a narrow product category.

"As AI systems become more powerful and more autonomous, security must evolve to match that reality," Stewart said. "AI security is not a point problem. It is a lifecycle problem. Data security is the first mile, but true protection requires securing everything AI is built on, from data and infrastructure to runtime behavior."

SentinelOne said it will continue building around a unified approach that connects data security, cloud posture management, AI-specific posture controls, and runtime workload protection across the AI lifecycle.