IT Brief New Zealand - Technology news for CIOs & IT decision-makers

New guidelines released for AI auditing by Cloud Security Alliance


The Cloud Security Alliance has released new guidelines for auditing artificial intelligence systems that extend beyond compliance requirements.

The document, titled "Artificial Intelligence (AI) Risk Management: Thinking Beyond Regulatory Boundaries," provides a comprehensive framework for evaluating AI systems. It has been developed by the AI Governance & Compliance Working Group of the Cloud Security Alliance (CSA).

Ryan Gifford, Research Analyst at Cloud Security Alliance, who is also part of the Working Group's leadership team, said: "The ubiquitousness of intelligent systems in today's world requires that auditors are not only willing but able to assess these systems beyond simply ticking checkboxes. While the need for accurate, purposeful, and results-based AI auditing is mission-critical, trust in AI can only be achieved through a far-reaching approach to auditing that goes beyond what's required. It's our hope that auditors can begin to address compliance proactively and thoroughly, utilizing the framework outlined in this document."

The guidelines follow an earlier document, "AI Resilience: A Revolutionary Benchmarking Model for AI Safety," and are intended to apply across various industries. They focus on privacy, security, and trust through an approach that encourages critical and investigative thinking, and are structured to help auditors evaluate AI systems for unintended behaviours.

Building on existing best practices, the document addresses the entire AI lifecycle, from development and deployment to monitoring and decommissioning. It includes sample questions for audits and assessments, providing foundational knowledge on AI resilience, types of AI systems, and other key concepts such as responsibility, accountability, and liability.

Sections within the guidelines cover AI governance, applicable legal standards, management of third-party suppliers, and infrastructure. The aim is to mitigate risks and enhance transparency, ensuring AI systems are both compliant and trustworthy.

The AI Governance & Compliance Working Group seeks to play a central role in establishing governance and compliance standards for artificial intelligence. It aims to influence policy and legislation and to set benchmarks for best practices.
