CBA combats cyber abuse with free AI & machine-learning techniques
Commonwealth Bank (CBA) is on a mission to reduce technology-facilitated abuse worldwide by making its Artificial Intelligence (AI) and machine-learning techniques available to all financial institutions free of charge. This pioneering step aims to help banks identify digital payment transactions that incorporate harassing, threatening or offensive messages, signalling an era of safer banking experiences for customers globally.
CBA's proprietary AI model helps identify such instances, which the bank refers to as 'technology-facilitated abuse'. The model's source code became available earlier this week through the bank's partnership with global AI leader H2O.ai, marking a significant stride in the bank's bid to reduce cyber abuse.
The AI model operates alongside the bank's automatic block filter, which was introduced in 2020 across its digital banking channels to halt transactions involving abusive language. Since its inception, the filter has successfully blocked almost 1 million abusive, threatening or offensive transactions, thereby safeguarding customers from potential harassment.
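To make the filter concept concrete, here is a minimal sketch of a keyword-based transaction-description check. This is purely illustrative: the term list, function name and matching logic are assumptions for the example, not CBA's actual implementation, which is far more sophisticated.

```python
# Hypothetical sketch of a keyword-based block filter for payment
# descriptions. BLOCKED_TERMS is a placeholder list, not CBA's real one.
BLOCKED_TERMS = {"pay up", "worthless", "threat"}

def should_block(description: str) -> bool:
    """Return True if the payment description contains a blocked term."""
    text = description.lower()
    return any(term in text for term in BLOCKED_TERMS)
```

In a real system, a transaction whose description triggers the check would be declined before the message ever reaches the recipient.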
Angela MacMillan, Group Customer Advocate at CBA, stated, "Financial abuse occurs when money is used to gain control over a partner and is one of the most powerful ways to keep someone trapped in an abusive relationship. Sadly, we see that perpetrators use all kinds of ways to circumvent existing measures such as using the messaging field to send offensive or threatening messages when making a digital transaction."
MacMillan explained that the technology was developed because some customers were exploiting transaction descriptions to harass or threaten others. "By using this model we can scan unusual transactional activity and identify patterns and instances deemed to be high risk so that the bank can investigate these and take action," she elaborated.
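The pattern-scanning idea MacMillan describes can be sketched as a simple heuristic: repeated low-value payments whose description fields carry messages may indicate the field is being misused as a contact channel. The thresholds, field names and scoring below are assumptions for illustration only, not the bank's model.

```python
# Illustrative sketch of pattern-based risk flagging: senders who make
# many tiny payments with non-empty message fields are surfaced for
# manual review. All thresholds here are hypothetical.
from collections import defaultdict

def flag_high_risk(transactions, min_count=10, max_amount=1.0):
    """Return sender IDs with many low-value payments carrying messages."""
    counts = defaultdict(int)
    for t in transactions:
        if t["amount"] <= max_amount and t["description"].strip():
            counts[t["sender"]] += 1
    return {sender for sender, n in counts.items() if n >= min_count}
```

Flagged senders would then be investigated by bank staff rather than blocked automatically, matching the article's description of manual review.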
Highlighting the effectiveness of the model, MacMillan further stated, "This AI model detects around 1,500 high-risk cases annually. By sharing our source code and model with any bank in the world, it will help financial institutions have better visibility of technology-facilitated abuse. This can help to inform action the bank may choose to take to help protect customers."
The CBA model's key contribution is its potential to catch digital abuse that slips past the automatic filter blocking threatening, harassing or abusive transaction descriptions. Leveraging AI and machine learning not only helps identify these more insidious forms of abuse but also strengthens the bank's capacity to review and address flagged instances manually. While the response to the technology has been largely positive, its necessity reflects society's broader challenges around technology-facilitated abuse.
The bank's measures against technology-facilitated abuse therefore combine an automatic filter that blocks abusive words in digital payment transactions with AI that identifies more subtle forms of abuse. Upon detection, the bank can manually examine these instances and take the necessary action to protect its customers.
The initiative follows the bank's pilot scheme with NSW Police, in which it referred perpetrators of financial abuse to the police, always with the customer's consent.