F5 launches AI Guardrails & Red Team for runtime security
F5 has launched two new products for AI runtime security, F5 AI Guardrails and F5 AI Red Team, announcing that both are now generally available.
The company positioned the releases as part of a broader approach spanning testing and runtime protection for enterprise AI deployments. F5 also stated that the products extend its Application Delivery and Security Platform.
"Traditional enterprise governance cannot keep up with the velocity of AI," said Kunal Anand, Chief Product Officer, F5. "When policy lags adoption, you get data leaks and unpredictable model behaviour. Organisations need defences that are as dynamic as the models themselves. F5 AI Guardrails secures the traffic in real time, turning a black box into a transparent system, while F5 AI Red Team proactively finds vulnerabilities before they reach production. This allows organisations to stop fearing risk and start shipping apps and features with confidence."
F5 framed the announcement around increasing use of AI across customer-facing services, internal workflows, and higher-stakes decision making. The company said enterprises face a mix of threats and operational risks. It cited adversarial manipulation of models, data leakage, unpredictable user interactions, and rising compliance obligations.
Runtime controls
F5 AI Guardrails provides runtime security controls for AI models, applications, and AI agents, according to the company. F5 described it as model-agnostic and said it can apply consistent policy enforcement across different clouds and deployment environments.
F5 said the product aims to address threats such as prompt injection and jailbreak attacks. It also said the product can prevent sensitive data leakage and enforce corporate and regulatory obligations. The company cited GDPR and the EU AI Act as examples.
F5 has also highlighted the observability and auditability features within AI Guardrails. The company stated that these functions provide visibility into AI inputs and outputs, allowing teams to examine outcomes and the reasoning behind model behaviour. F5 described this transparency as essential for governance and compliance within regulated industries.
Adversarial testing
F5 AI Red Team focuses on automated adversarial testing, with the company noting that it simulates both common and niche threat vectors while scaling testing across various deployments.
The company also highlighted an AI vulnerability database associated with the product. F5 stated that this database is updated with more than 10,000 new attack techniques each month as threats evolve.
F5 said AI Red Team identifies where models can produce dangerous or unpredictable outputs. It also said findings feed back into AI Guardrails policies. The company described the combined workflow as a continuous feedback loop that links proactive testing with adaptive policy enforcement.
Platform positioning
F5 has stated that the two products integrate with the existing security controls already utilised by many enterprises. The company cited API security, web application firewalls, and DDoS defences as integral components of the wider stack. F5 noted that this approach secures AI systems alongside existing applications, reducing the reliance on disparate point products.
F5 also said the new tools align with customer requirements around deployment flexibility and protection across different models. It said organisations can tailor security policies and adapt them in real time.
The company said both products are already deployed at Fortune 500 enterprises across multiple industries. It cited highly regulated financial services and healthcare organisations as examples.
F5 has confirmed that F5 AI Guardrails and F5 AI Red Team are available immediately as part of the F5 Application Delivery and Security Platform.