Can AI promote inequality in the workplace?
Fri, 2nd Nov 2018

Artificial intelligence (AI) is bringing incredible changes to the workplace, yet left unchecked it can amplify and reinforce gender bias. How is it possible for machines to be sexist?

While it may sound odd that AI could be gender-biased, there's plenty of evidence to show that it's happening when organisations don't take the right steps.

Given the spotlight on gender equality through #MeToo and New Zealand's focus on lifting gender parity in the workplace, we think it's important to uncover why this bias in AI occurs and what to do about it.

At Accenture, we have set a global goal of achieving a gender-balanced workforce by 2025.

We know that diversity in the workplace leads to better outcomes, and it's time for business to step up, address this and create a truly equal workplace.

New Zealand workplaces still suffer from gender inequality. The median wage for women is just under $38,000 and the gender pay gap is 9.4%, according to Statistics New Zealand.

With this in mind, software developers should be fully versed in how bias can become embedded in AI, and in how to prevent that from happening.

Showing prejudice towards others does not require a high level of cognitive ability and could easily be exhibited by artificially intelligent machines, recent research suggests.

Computer science and psychology experts from Cardiff University and MIT have shown that groups of autonomous machines could demonstrate prejudice by simply identifying, copying and learning this behaviour from one another.
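
As a loose illustration of that copying dynamic, consider the toy Python simulation below. The payoffs, group setup and imitation rule are invented for this sketch – it is not the researchers' actual model – but it shows how a strategy can spread once agents start copying whoever is scoring best.

```python
import random

random.seed(1)

# Toy agents: each belongs to a group and either donates to everyone or
# only to its own group ("prejudiced"). All numbers here are invented.
agents = [{"group": random.choice("AB"),
           "prejudiced": random.random() < 0.3} for _ in range(100)]

def payoff(agent):
    """In this toy, donating to the out-group costs more than it returns."""
    score = 0.0
    for other in agents:
        if other is agent:
            continue
        same = agent["group"] == other["group"]
        if agent["prejudiced"] and not same:
            continue  # refuses to donate outside its own group
        score += 1.0 if same else -0.5
    return score

# Each round, a random subset of agents copies the strategy of the
# best-scoring agent among them – the "identify and copy" dynamic.
for _ in range(20):
    sample = random.sample(agents, 10)
    best = max(sample, key=payoff)
    for agent in sample:
        agent["prejudiced"] = best["prejudiced"]

print(sum(a["prejudiced"] for a in agents), "of", len(agents), "now prejudiced")
```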

There are plenty of examples of gender prejudice in AI. Last year, media reported on translation websites converting the names of occupations from Turkish and Finnish, both gender-neutral languages, into English: the sites gave male pronouns to professions such as police officer, engineer and leader, and female pronouns to jobs such as secretary, nanny and nurse.

Another study shows how images that are used to train image-recognition software amplify gender biases.

Two large image collections used for research purposes – including one supported by Microsoft and Facebook – were found to display predictable gender biases in photos of everyday scenes such as sport and cooking.

Images of shopping and washing were linked to women, while coaching and shooting were tied to men.
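
To make the kind of skew these studies measure concrete, here is a minimal Python sketch over made-up annotations (the labels are hypothetical, not drawn from the actual research collections): for each activity it counts how often the pictured person is labelled a woman versus a man.

```python
from collections import Counter

# Hypothetical annotations: each image is tagged with an activity and the
# perceived gender of the person shown. Labels are invented for illustration.
annotations = [
    ("cooking", "woman"), ("cooking", "woman"), ("cooking", "man"),
    ("shopping", "woman"), ("shopping", "woman"),
    ("coaching", "man"), ("coaching", "man"), ("coaching", "woman"),
]

counts = Counter(annotations)
activities = {activity for activity, _ in annotations}

# For each activity, report how skewed its images are towards one gender.
for activity in sorted(activities):
    w = counts[(activity, "woman")]
    m = counts[(activity, "man")]
    print(f"{activity}: {w} woman / {m} man -> {w / (w + m):.0%} woman")
```

A model trained on such a skewed collection can end up more biased than the data itself, because it learns to use the pictured person's gender as a cue for the activity.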

Another way to uncover gender bias in machines is to develop algorithms that can detect it.

Scientists at Boston University have been working with Microsoft on a concept called word embeddings – numerical representations of words that serve as a kind of computer dictionary for AI programs, placing words with similar meanings close together in a vector space.

They have found that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent, according to their paper.
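
A rough sketch of how such a stereotype shows up: in an embedding space, analogies like "man is to X as woman is to ?" are answered with vector arithmetic – take vec(X) - vec(man) + vec(woman) and find the nearest word. The tiny three-dimensional vectors below are synthetic, built so that one dimension deliberately carries gender; real embeddings such as the Google News vectors are learned from text and have hundreds of dimensions.

```python
import numpy as np

# Synthetic toy embeddings: dimension 0 encodes gender, dimension 1
# "occupation-ness", dimension 2 "royalty". Real vectors are learned.
vocab = {
    "man":        np.array([ 1.0, 0.0, 0.0]),
    "woman":      np.array([-1.0, 0.0, 0.0]),
    "king":       np.array([ 1.0, 0.0, 1.0]),
    "queen":      np.array([-1.0, 0.0, 1.0]),
    "programmer": np.array([ 0.9, 1.0, 0.0]),
    "homemaker":  np.array([-0.9, 1.0, 0.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' by nearest neighbour to b - a + c."""
    target = vocab[b] - vocab[a] + vocab[c]
    return max((w for w in vocab if w not in (a, b, c)),
               key=lambda w: cosine(vocab[w], target))

print(analogy("man", "king", "woman"))        # queen – a legitimate analogy
print(analogy("man", "programmer", "woman"))  # homemaker – a stereotyped one
```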

They've combed through hundreds of billions of words from public data, keeping legitimate correlations (man is to king as woman is to queen) and altering ones that are biased (man is to computer programmer as woman is to homemaker), to create an unbiased public data set.
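
The "altering" step can be pictured as projecting the gender component out of words that ought to be gender-neutral, such as occupations. The sketch below is a simplified version of that idea; the published method is more involved, for instance also equalising explicitly gendered pairs like man/woman.

```python
import numpy as np

# Same toy vectors as above: dimension 0 deliberately carries gender.
man = np.array([1.0, 0.0, 0.0])
woman = np.array([-1.0, 0.0, 0.0])
programmer = np.array([0.9, 1.0, 0.0])

gender = man - woman
gender /= np.linalg.norm(gender)  # unit vector along the gender direction

def neutralise(v):
    """Remove the component of v that lies along the gender direction."""
    return v - (v @ gender) * gender

debiased = neutralise(programmer)
# After neutralising, "programmer" is equidistant from "man" and "woman".
print(debiased @ man, debiased @ woman)  # both 0.0
```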

It's important to get the settings for AI right now, to ensure that it doesn't further widen the gender divide in the workplace and instead supports the movement towards an equal workplace.