Pega strives to rid AI-based CX systems of discrimination
Thu, 21st May 2020

Pegasystems (Pega) wants to do its part to help remove hidden biases from artificial intelligence (AI) that drives customer experience.

This week, the company launched a new Ethical Bias Check as part of its Pega Customer Decision Hub. The check is designed to flag possible discriminatory messages created by AI before those messages reach the customer.

“As AI is being embedded in almost every aspect of customer engagement, certain high-profile incidents have made businesses increasingly aware of the risk of unintentional bias and its painful effect on customers,” explains Pegasystems vice president of decisioning and analytics, Rob Walker.

AI models can unintentionally learn biases related to age, ethnicity, or gender. This could lead to harmful discrimination, such as offering fewer loans, insurance policies, or product discounts to underserved populations.

However, AI can add real value to customer interactions when done right – benefits include deeper brand loyalty.

Pegasystems states that businesses cannot afford to lose customer trust, particularly as the world recovers from the pandemic.

Pega says the Ethical Bias Check detects unwanted discrimination by using predictive analytics to simulate the likely outcomes of a given strategy.

The Ethical Bias Check features an AI “brain,” which screens all AI decisions for bias – including which marketing offers to display on the web, which promotions to deliver in an email, or what service suggestions to make for each customer.

After setting their testing thresholds, clients receive alerts when the bias risk reaches unacceptable levels – such as if the audience for an offer skews toward or away from specific demographics.
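Pega has not published the internals of the check, but the workflow described above – simulate a strategy, measure how the targeted audience skews across demographic groups, and alert when the skew passes a configured threshold – can be sketched in a few lines of Python. Everything below (the toy audience, the simulate_strategy function, the 1.25 ratio) is an invented illustration, not Pega's implementation.

```python
import random
from collections import Counter

random.seed(42)

# Toy audience: each customer has a protected attribute and a model score.
customers = [
    {"age_band": random.choice(["18-34", "35-54", "55+"]),
     "propensity": random.random()}
    for _ in range(10_000)
]

def simulate_strategy(audience, cutoff=0.6):
    """Simulate the strategy: target everyone the model scores above the cutoff."""
    return [c for c in audience if c["propensity"] >= cutoff]

def selection_rates(audience, targeted, attribute):
    """Share of each demographic group that the simulated strategy selects."""
    totals = Counter(c[attribute] for c in audience)
    picked = Counter(c[attribute] for c in targeted)
    return {group: picked[group] / totals[group] for group in totals}

def bias_alerts(rates, max_ratio=1.25):
    """Flag groups whose selection rate strays too far from the average rate."""
    mean_rate = sum(rates.values()) / len(rates)
    return {g: round(r, 3) for g, r in rates.items()
            if r > mean_rate * max_ratio or r < mean_rate / max_ratio}

targeted = simulate_strategy(customers)
rates = selection_rates(customers, targeted, "age_band")
alerts = bias_alerts(rates)
print("Selection rate by age band:", {g: round(r, 3) for g, r in rates.items()})
print("Alerts:", alerts or "none - skew within threshold")
```

In a real deployment the audience, scores, and offers would come from the decisioning system itself; the sketch only illustrates the shape of the check – compare per-group outcomes against a tolerance and escalate when they diverge.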

The check's bias detection reports help clients understand why and where bias might arise so they can correct it before it affects customers.

Operations teams can then pinpoint the offending algorithm and adjust the strategy to help ensure a fair and more balanced outcome.

Companies can set acceptable thresholds for each element that could introduce bias, such as age, gender, or ethnicity. Businesses can adjust these thresholds for scenarios where a slanted outcome may be justified.

Companies can also widen or narrow the thresholds to account for the desired outcomes of their engagement strategies.
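Continuing the hypothetical sketch above, widening or narrowing a threshold per attribute could be as simple as a configuration entry; the structure and values here are invented, not Pega's configuration format.

```python
# Hypothetical per-attribute bias thresholds for the earlier sketch;
# the keys and values are invented, not Pega's configuration format.
bias_thresholds = {
    "age_band":  {"max_ratio": 1.25},
    "gender":    {"max_ratio": 1.10},  # tighter tolerance
    "ethnicity": {"max_ratio": 1.10},
}

# Widen the age tolerance for a campaign where age skew is justified,
# e.g. a retirement-planning offer aimed at older customers.
bias_thresholds["age_band"]["max_ratio"] = 3.0
```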

Companies can include bias testing as a standard course of action when simulating strategy results.

Pegasystems has previously introduced other features and functions in Pega Customer Decision Hub aimed at helping clients act responsibly with AI. This includes the T-Switch, which gives organisations control over the transparency of their AI customer engagement models.