IT Brief New Zealand - Technology news for CIOs & IT decision-makers

The security challenges posed by increasing use of generative AI


The rapid rise of generative AI (GenAI) tools is transforming productivity, enhancing capabilities across computer code generation, data analysis, and corporate writing. Yet, with this transformation comes a new set of security and governance challenges that demand urgent attention from risk management, security, legal, and compliance teams.

Balancing Benefits and Security Risks of GenAI
GenAI tools can process vast amounts of data and generate unique responses to user prompts. This flexibility creates a broad and often unpredictable risk surface, requiring risk management frameworks that can adapt to GenAI's evolving capabilities. From a security standpoint, organisations need controls that ensure data remains secure and that privacy protections are in place, as misuse of sensitive data can have significant legal and commercial consequences.

Unique Challenges of GenAI for Risk Management
Unlike traditional software applications, GenAI systems are non-deterministic: the same input can produce different outputs on different runs. This makes information security, information assurance, and responsible use considerably harder to assure for GenAI applications. In particular, security teams face the dual responsibility of facilitating the deployment of GenAI tools to meet business needs while also ensuring the integrity and security of legacy systems and data.
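This non-determinism can be illustrated with a toy sampling loop. The sketch below is a minimal illustration, not any vendor's implementation: at the non-zero "temperatures" most GenAI tools run with, repeated calls on an identical input yield different outputs.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample one token index from logits softened by temperature.
    As temperature approaches zero this approaches argmax (deterministic);
    higher temperatures make repeated calls increasingly varied."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Identical input (the "logits") on every call; only the sampling varies.
logits = [2.0, 1.5, 0.5]
rng = random.Random(42)

deterministic = {sample_token(logits, 0.01, rng) for _ in range(20)}
creative = {sample_token(logits, 1.5, rng) for _ in range(20)}

print(len(deterministic))  # near-zero temperature collapses to one outcome
print(len(creative))       # higher temperature yields several outcomes
```

The same property that makes GenAI outputs feel creative is what makes their behaviour hard to test and audit with conventional, deterministic QA methods.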

To achieve this balance, tailored GenAI policy frameworks and governance mechanisms are crucial, setting clear guidelines for GenAI use while maintaining comprehensive security practices that keep pace with the rapidly changing technology.

Key challenges that risk managers face with GenAI include:

  1. Transparency and Accountability: GenAI systems often function as "black boxes," making it difficult to trace how they generate specific outputs. This lack of transparency adds complexity and inherent, potentially irreducible, uncertainty to traditional risk management processes.
  2. Cross-Functional Collaboration: GenAI risk management requires collaboration across legal, compliance, data science, and IT security. Unlike past approaches where each corporate function could operate independently, effective governance now requires a unified approach. This coordination is essential to manage GenAI's impacts on data security, privacy, and ethical/legal compliance.
  3. Regulatory Uncertainty: AI technology continues to outpace regulatory efforts, creating a landscape where few clear guidelines exist, and industry best practices are still nascent. Until formalised standards emerge, organisations must proactively establish internal guidelines to manage AI risks, drawing on voluntary frameworks like NIST's AI Risk Management Framework for initial guidance.
  4. Continuous Evolution: GenAI tools are not static; they continually evolve, often in unexpected directions. This dynamic nature makes it challenging for security teams to keep policies up to date, as evolving capabilities may expose new vulnerabilities. Organisations should build adaptable frameworks that allow for ongoing updates and reassessment of security measures, and provide professional development and outside resources to keep the organisation ahead of the curve.

Rising Impact of Multi-Modal AI
Emerging multi-modal AI tools, which integrate capabilities across text, images, audio, and video, will only compound the risk management challenge. These tools handle a wider range of data types, increasing both the volume and complexity of the data to secure and the compliance issues to manage. Security and governance teams must therefore design policies that address this richer data environment while maintaining controls that keep these multifaceted interactions secure and responsibly used.

A Strategic Approach to GenAI Risk Management
To effectively manage the security implications of GenAI, we advise organisations to focus on six core areas:

  • Regulatory & Compliance: Ensure ongoing alignment with emerging regulations, even if many standards are still in development.
  • Technology & Security: Continuously monitor and secure AI tools within the organisation's broader cybersecurity framework.
  • Data & Privacy: Protect sensitive data by enforcing privacy safeguards and establishing strict data-handling practices.
  • Operational Disruption: Minimise potential disruptions by identifying and mitigating risks associated with AI tool integration.
  • Legal: Stay ahead of legal challenges by monitoring AI-related legislation and proactively addressing liability risks.
  • Reputational: Safeguard brand integrity by ensuring AI tools operate ethically and transparently, mitigating risks of bias or misuse.
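As one concrete illustration of the Data & Privacy area, a pre-submission filter can strip obvious sensitive patterns from prompts before they leave the organisation. The sketch below is hypothetical: the `redact_prompt` helper and its patterns are illustrative assumptions, not a specific product's API, and real deployments would need far broader coverage.

```python
import re

# Hypothetical pre-submission filter: patterns and names are illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),      # card-like digit runs
    "NZ_IRD": re.compile(r"\b\d{2,3}-\d{3}-\d{3}\b"),   # NZ IRD-style numbers
}

def redact_prompt(prompt: str) -> str:
    """Replace obvious sensitive tokens with placeholders before the
    prompt is sent to an external GenAI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

safe = redact_prompt("Refund jane.doe@example.com, card 4111 1111 1111 1111")
print(safe)
```

Pattern-based redaction is only a first line of defence; it complements, rather than replaces, the policy controls and access restrictions described above.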

Preparing for Future GenAI Developments
As GenAI capabilities rapidly expand, organisations that proactively address its unique risks will be best positioned to capitalise on its benefits. SentinelOne's experience with AI-based tools like Purple AI, which leverages large language models to streamline threat detection and response, underscores the need for governance that combines traditional cybersecurity with AI-specific security measures.

Establishing robust, flexible risk management frameworks today will enable organisations to harness GenAI's productivity benefits while maintaining resilience against evolving threats. Those who embrace an adaptive, strategic approach will be better equipped to handle the challenges and opportunities GenAI presents.
