How to ensure ethical deployment of AI implementations
Wed, 18th May 2022

The significant increase in automation and machine technology such as AI and machine learning has undoubtedly unlocked a whole new level of scale and service for organisations.

We likely all expect that one of the advantages of AI is the opportunity to remove human-led biases and reduce discrimination against minority groups. However, when managed poorly, AI can further entrench discrimination by embedding bias in its algorithms.

Today, machines routinely decide whether we're eligible for a mortgage or subject to surveillance by law enforcement agencies or insurance companies seeking to crack down on fraud. Their reach even extends to deciding which adverts you see online, including the advert for that highly paid role.

In many organisations, the AI inside automated systems is not well documented or understood. It's time for automated decision-making to step out of the shadows and be held accountable.

When automated decision-making impacts people's lives, either directly or indirectly, and machines can discriminate in harmful ways, organisations must sit up, pay attention and act to ensure AI is implemented as ethically as possible.

First steps

Businesses and government organisations alike should be striving for the highest levels of protection against harm from any machine technology they deploy. At the start of any automation project, organisations must undertake legal, privacy, and ethical impact assessments to confirm the risks are well understood and can be mitigated satisfactorily. This also ensures the most appropriate solution is selected to establish an acceptable risk level while also delivering value.

The sign-off on these assessments should sit with an objective, multi-disciplinary review panel with the power of veto over any problematic aspect of a project, including the manner of deployment, the level of automation and the opportunity for recourse. Deployment must be a collaborative process between the data/technology teams and the business leadership team to operationalise best-practice ethics within data and analytics.

Deployment

The ombudsman's report outlines strong advice on good practice for designing and implementing machine technology. Still, we believe all organisations are obliged to consider the following best practices as a minimum:

  • The ethical considerations of fairness, transparency, non-maleficence, privacy, respect for autonomy and accountability dictate that any organisation implementing machine technology must ensure it performs to the highest levels of accuracy for all affected groups;
  • That there is a mechanism to explain any decision based on the output of a model or system (a minimal sketch of one such mechanism follows this list);
  • That there are processes to detect and mitigate harmful outcomes;
  • That people can give informed consent to participate in the process;
  • That there are mechanisms to contest any outcome perceived as unjust.
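
To illustrate the explainability point above, here is a minimal sketch of one possible mechanism, assuming a simple scikit-learn logistic regression whose signed feature contributions can be read off directly; the feature names and data are entirely hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features for a lending decision; names are illustrative only.
feature_names = ["income", "debt_ratio", "years_employed"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -2.0, 0.8]) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain_decision(x):
    """Signed contribution of each feature to the decision score, largest first."""
    contributions = model.coef_[0] * x
    return sorted(zip(feature_names, contributions),
                  key=lambda pair: abs(pair[1]), reverse=True)

applicant = X[0]
print("approved" if model.predict(applicant.reshape(1, -1))[0] else "declined")
for name, contribution in explain_decision(applicant):
    print(f"  {name}: {contribution:+.2f}")
```

More complex models would need a dedicated explanation technique, but the principle is the same: every automated decision should be traceable to the factors that drove it.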

The development and deployment of any machine technology should be iterative, starting with an ethical review of its accuracy against historical data to ensure performance is consistent across the sample population. If there are groups for which performance is substantially worse, more data must be sought to ensure adequate representation of all groups.
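
As a minimal sketch of what such a consistency check might look like, the following compares accuracy per group against the overall figure, assuming predictions and group labels have already been collected; the groups and tolerance threshold are illustrative:

```python
import pandas as pd

# Hold-out predictions with a (hypothetical) demographic group label per row.
results = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "C", "C"],
    "actual":    [1, 0, 1, 1, 1, 0, 0, 1],
    "predicted": [1, 0, 1, 0, 1, 0, 0, 0],
})

overall = (results["actual"] == results["predicted"]).mean()
per_group = (results.assign(correct=results["actual"] == results["predicted"])
                    .groupby("group")["correct"].mean())

MAX_GAP = 0.10  # assumed tolerance for a group's accuracy gap vs overall
for group, accuracy in per_group.items():
    if overall - accuracy > MAX_GAP:
        print(f"group {group}: accuracy {accuracy:.2f} - seek more representative data")
    else:
        print(f"group {group}: accuracy {accuracy:.2f} - ok")
```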

Where the risk of harmful consequences is identified, deployment should be similarly iterative and cautious, starting with human-in-the-loop solutions to ensure human oversight while confidence is gained in the performance of the model or system.
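
One common way to realise a human-in-the-loop stage is to auto-apply only high-confidence predictions and queue everything else for a reviewer. The following is a minimal sketch under that assumption; the 0.90 threshold is an illustrative choice, not a recommendation:

```python
REVIEW_THRESHOLD = 0.90  # assumed confidence level for automatic handling

def route_decision(probability):
    """Auto-apply confident outcomes; queue everything else for a human."""
    if probability >= REVIEW_THRESHOLD or probability <= 1 - REVIEW_THRESHOLD:
        return "auto", probability >= 0.5
    return "human_review", None

# Hypothetical scored cases: (case id, model probability of a positive outcome).
for case_id, p in [(101, 0.97), (102, 0.55), (103, 0.04)]:
    route, outcome = route_decision(p)
    print(case_id, route, outcome)
```

As confidence in the model grows, the threshold can be relaxed; until then, the ambiguous middle band stays with a person.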

This is not to say that the human decision-making process is infallible; it merely offers an opportunity to understand and interrogate outputs before deployment. This stage should involve the most trusted operators to reduce the likelihood of human bias being reintroduced to the process. Additionally, everybody involved should have undertaken unconscious bias training.

Once in production, the accuracy and performance of any machine technology must be measured and monitored on an ongoing basis. This performance should be reportable and visible across the organisation, alongside existing KPIs.
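
A minimal sketch of such monitoring might aggregate accuracy by week from labelled production outcomes and escalate when it drops; the baseline and alert threshold below are assumed figures:

```python
# Hypothetical weekly accuracy figures gathered from labelled production outcomes.
weekly_accuracy = {"2022-W16": 0.91, "2022-W17": 0.90, "2022-W18": 0.83}

BASELINE = 0.92    # assumed accuracy signed off at deployment
ALERT_DROP = 0.05  # assumed tolerated drop before escalation

for week, accuracy in weekly_accuracy.items():
    status = "ALERT - escalate for review" if BASELINE - accuracy > ALERT_DROP else "ok"
    print(f"{week}: accuracy {accuracy:.2f} [{status}]")
```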

Review

Any organisation implementing algorithmic decision-making needs an objective ethical review process encompassing both quantitative and qualitative considerations. Model performance should be monitored against these ethical metrics to detect any anomalies in performance for minority groups, as well as any change in performance over time. The model can then be adapted and adjusted on an ongoing basis as part of the operational process.
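
As a minimal sketch of this ongoing review, the following tracks one assumed ethical metric (approval rate) per group across review periods and flags movement beyond a tolerated shift; all figures are illustrative:

```python
# Hypothetical approval rates per group across four review periods.
history = {
    "group_A": [0.61, 0.60, 0.62, 0.59],
    "group_B": [0.58, 0.57, 0.48, 0.45],
}
MAX_SHIFT = 0.05  # assumed tolerated movement since the first review period

for group, rates in history.items():
    shift = rates[-1] - rates[0]
    if abs(shift) > MAX_SHIFT:
        print(f"{group}: approval rate moved {shift:+.2f} - adapt and re-review the model")
    else:
        print(f"{group}: stable ({shift:+.2f})")
```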

While the implementation may seem daunting, organisations must uplift their understanding and operationalisation of ethical considerations in their AI and machine learning projects. Businesses should adopt a 'question - review - measure - improve' approach to managing the performance and impact of their automated decision-making to ensure ethical outcomes.