
Europe contemplates new rules for AI – and what this might mean in A/NZ

31 Aug 2020

Article by SAS Europe and Asia Pacific chief privacy strategist Kalliopi Spyridaki.

At the beginning of 2021, the European Commission will propose legislation on AI that will be, at first instance, horizontal (as opposed to sectoral) and risk-based, with mandatory requirements for high-risk AI applications. 

The new rules will aim at ensuring transparency, accountability and consumer protection, including safety, through robust AI governance and data quality requirements. 

Europe’s approach to regulating technology is based on the precautionary principle, which enables rapid regulatory intervention in the face of possible danger to human, animal or plant health, or to protect the environment. This perspective has helped Europe to become a global leader in the shaping of the digital technology market. 

In particular, with the introduction of the General Data Protection Regulation (GDPR) in 2018, Europe considers that it has gained a competitive advantage by creating a trust mark for increased privacy protection. 

How will Australia and New Zealand be impacted?

Historically, Australia and New Zealand have had a close relationship with the European Union (EU) and its member countries. They share a commitment to democracy, the rule of law and a respect for human rights. Not surprisingly, the ongoing discussions on AI ethics reflect similar concerns and objectives. 

When Europe legislates on AI, it is inevitable that Australia and New Zealand will be impacted. Europe’s strong trading partners with shared values will want to benefit from trusted AI developed in Europe. 

Equally, AI developed in Australia or New Zealand should cross national borders without burdensome obligations, particularly for smaller providers. This requires similar rules on AI development and use. 

Considering the breadth of AI technologies and applications across all sectors, Europe has embarked on a truly challenging venture. If AI is defined broadly, the law risks becoming unnecessarily burdensome without bringing benefits for many AI applications where trust is not relevant, e.g. industrial applications. 

A narrow definition of AI, on the other hand, may not provide futureproof protection given the pace of technological evolution. 

Other challenges relate to the type and level of risk that should be regulated. The new AI rules will need to target legal gaps related to the risk that AI applications may pose to physical safety. These gaps may exist, for example, in rules on liability and compensation. 

Another type of risk to be addressed by AI legislation concerns human rights, including, for instance, discrimination and privacy protection. The identification of the criteria for ‘high’ risk AI applications will set a crucial threshold for the application of the new AI law. 

Europe’s new AI rules are also expected to require an ex-ante conformity assessment before a product or service is placed on the market. 

Because AI systems are not “static” products, repeated assessments may be necessary to manage compliance over the AI system’s lifetime. 

Options related to compliance and enforcement mechanisms range from the creation of a new AI regulator to the introduction of certification bodies, accreditation schemes and training programs for the testers. 

The associated costs for European AI developers may be outweighed by job creation and a new demand for skills that would enhance Europe’s global competitiveness.

The long list of issues in this regulatory debate includes whether the public use of biometric identification systems, including facial recognition, should be restricted unless “allowed by law”, and whether the use of AI-enabled lethal autonomous weapons systems should be banned. 

Therefore, Europe’s new AI rules have the potential to set a global standard, at least for Europe’s trading partners. Interestingly, both Australia and New Zealand have initiatives in place that may put their own spin on the potential ripple effect of Europe’s new AI rules.

Australia and the CDR

Australia recently introduced the Consumer Data Right (CDR). The CDR, which applies initially to the banking sector, aims at improving consumers’ ability to compare and switch between products and services. 

The introduction of the CDR is a unique approach addressing a challenge that many regulators globally strive to tackle. Namely, the CDR “expands” personal data protections beyond the right to privacy. 

It focuses on consumer protection and encourages competition between service providers, thus creating further market benefits through better prices and product innovation.

The CDR does not go as far as to create a data ownership right for individuals. But it empowers individuals to manage their data. 

Given that data is the backbone of AI, the CDR will fundamentally shape Australia’s contribution to the global regulatory efforts on AI. 

New Zealand reimagines regulation for the AI age 

‘Reimagining Regulation for the Age of AI’ is a pilot project by the World Economic Forum (WEF) in partnership with the Government of New Zealand that aims at designing actionable governance frameworks for AI regulation. 

The Government responded to the complicated endeavour of regulating AI with this project, which takes an evidence-based, methodical approach, analysing existing regulatory tools and potential policy options. 

The Government will pilot the suggested AI regulation frameworks to offer an understanding of what works and why. The openness of this project not only brings a global perspective to New Zealand but also provides an insightful and influential analysis to legislators globally. 
