EU pioneers global regulation of artificial intelligence sector
Fri, 15th Mar 2024

The European Union has ushered in a pioneering set of regulations for the artificial intelligence (AI) sector - the first comprehensive rules of their kind to be adopted anywhere in the world. The law, commonly referred to as the AI Act, sets out a structured approach to overseeing the potential risks associated with AI technologies and positions the EU as a frontrunner in addressing AI-related threats.

The AI sector has seen a surge in progress and adoption, fuelling notable gains in profitability and productivity. However, this rapid expansion has raised concerns over bias, privacy, and wider societal challenges. The AI Act aims to address these fears by classifying AI products according to their risk levels and adjusting regulatory scrutiny accordingly. The intent behind the act is a more "human-centric" use of the technology.

The law is designed to manage AI based on its potential harm - the higher the risk posed, the stricter the regulations applied. This approach includes provisions specifically aimed at the risks inherent in generative AI tools, such as ChatGPT. With the AI Act, the EU goes further than both China, with its fragmented AI laws, and the US, whose 2023 executive order requires developers of powerful AI systems to share safety test data with the government.

Reflecting on the adoption of the AI Act, Enza Iannopollo, principal analyst at Forrester, said: "The adoption of the AI Act marks the beginning of a new AI era and its importance cannot be overstated." She noted that the move leaves every other region, including the UK, to "play catch up". The EU's action comes against a backdrop of the UK having no plans for similar legislation, despite calls from Jeremy Hunt, Chancellor of the Exchequer, for suitable "guard-rails" in AI regulation.

Sridhar Iyengar, Managing Director at Zoho Europe, concurred with this sentiment. Highlighting AI as a pivotal tool across business operations, he hailed the EU's decision as a welcome step toward ensuring safer, more trustworthy use of AI. He elaborated: "The EU's framework to mitigate AI risks, coupled with robust business policies to further protect themselves and customers, will allow organisations to have greater agility to react to market trends and better serve customers, all while maintaining a high level of trust."

John Kirk, Deputy CEO of ITG, echoed these sentiments and emphasised the need for expert collaboration in managing the risks of AI. He stated that correctly regulated AI could yield significant benefits across all industry sectors, particularly for global marketing campaigns.

However, some concerns persist. Elizabeth Anderson, CEO of the Digital Poverty Alliance, pointed out the potential for AI advances to exacerbate an already widening digital divide because of the skills gap. She advocated for robust policy discussions about digital exclusion and urged a multi-stakeholder effort to ensure AI development does not exclude or disadvantage digitally impoverished communities.