Red Hat buys Chatterbox to bolster AI safety tools
Red Hat has acquired UK-based artificial intelligence safety specialist Chatterbox Labs in a deal that adds automated security testing and guardrails technology to the Red Hat AI product suite.
The company plans to integrate Chatterbox's technology across its hybrid cloud AI platform, offering independent risk assessment and controls for a range of AI models and deployment environments.
AI risk focus
Chatterbox Labs, founded in 2011, specialises in quantitative AI risk assessment and transparency. It has developed tools that test generative and predictive models and generate metrics on robustness, fairness and explainability.
The company's products include AIMI for generative AI, which produces independent risk metrics for large language models, and AIMI for predictive AI, which validates a broad set of model architectures. A separate guardrails product analyses prompts and responses for insecure, toxic or biased content before deployment.
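The general idea behind this kind of prompt-and-response screening can be illustrated with a short sketch. The rules and function names below are invented for illustration only and do not represent Chatterbox Labs' actual products or APIs:

```python
import re

# Hypothetical blocklist rules; a real guardrail system would use trained
# classifiers and policy engines rather than simple patterns.
BLOCKLIST_PATTERNS = [
    r"(?i)\bpassword\s*[:=]",   # possible credential leakage
    r"(?i)\bdrop\s+table\b",    # possible SQL injection payload
]

def guardrail_check(text: str) -> dict:
    """Return a verdict and any rules the text tripped."""
    violations = [p for p in BLOCKLIST_PATTERNS if re.search(p, text)]
    return {"allowed": not violations, "violations": violations}

# A deployment would run a check like this on both the user prompt
# and the model response before either is passed on.
print(guardrail_check("Please summarise this report."))
print(guardrail_check("Here is the admin password: hunter2"))
```

In practice the same check sits on both sides of the model, so unsafe content is caught whether it arrives in a prompt or is generated in a response.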
Red Hat said the acquisition would add "security for AI" offerings alongside existing machine learning operations tools. The combined portfolio will sit within Red Hat AI, which includes the Red Hat AI Inference Server and the latest Red Hat AI 3 release.
As more organisations implement generative, predictive and agentic AI applications, technology suppliers are under pressure to demonstrate controls over safety, bias and governance. Regulators in major markets are also examining technical standards for AI testing and reporting.
Hybrid cloud strategy
Red Hat plans to align Chatterbox's guardrail products with its focus on hybrid and multi-cloud deployments. Customers will be able to use the safety tools alongside Red Hat's infrastructure software and AI lifecycle management services.
The vendor positions the combined platform as suitable for a range of hardware accelerators and for both on-premises and public cloud environments. It expects this approach to appeal to organisations that run sensitive workloads across multiple locations and providers.
Chatterbox has investigated the security of agentic AI systems that can act autonomously and interact with external tools, including monitoring agent responses and detecting when agents trigger actions on Model Context Protocol (MCP) servers. Red Hat said this research aligns with its roadmap for Llama Stack and MCP support.
As a result, Red Hat expects the combined portfolio to cover newer AI patterns that link models with business systems and automation platforms. It sees a greater need for monitoring and guardrails as AI agents gain broader permissions over data and transactions.
Industry positioning
Red Hat's parent IBM has made trust and security a central theme of its own AI strategy. Both companies promote an open source approach in which customers can inspect and modify components rather than rely solely on proprietary services.
In this context, Red Hat is positioning the Chatterbox purchase as a way to embed testing and safety into its standard AI tooling. It expects customers in regulated sectors, including financial services and healthcare, to demand quantitative evidence before approving models for live use.
Steven Huels, Vice President, AI Engineering and Product Strategy, Red Hat, said the pace of AI deployment is raising new concerns.
"Chatterbox Labs's innovative, model-agnostic safety testing and guardrail technology is the critical 'security for AI' layer that the industry needs. By integrating Chatterbox Labs into the Red Hat AI portfolio, we are strengthening our promise to customers to provide a comprehensive, open source platform that not only enables them to run any model, anywhere, but to do so with the confidence that safety is built in from the start," said Huels.
Open source angle
Chatterbox Labs has worked with independent think tanks and policy groups on AI safety frameworks. Its tools generate structured metrics that can support regulatory reporting and internal governance.
Stuart Battersby, Co-founder and Chief Technology Officer, Chatterbox Labs, said safety controls should not sit behind proprietary interfaces. In his view, the market needs transparent measures that customers and auditors can verify independently.
"As AI systems proliferate across every aspect of business and society, we cannot allow safety to become a proprietary black box. It is critical that AI guardrails are not merely deployed; they must be rigorously tested and supported by demonstrable metrics. Chatterbox Labs has pioneered this discipline from the early days of predictive AI through to the agentic systems of tomorrow," said Battersby.