IT Brief New Zealand - Technology news for CIOs & IT decision-makers
Fraud experts predict tripled use of AI tools within two years
Fri, 16th Feb 2024

A new study suggests that 83% of anti-fraud professionals anticipate the incorporation of generative artificial intelligence (AI) into their toolkits within the next two years. The 2024 Anti-Fraud Technology Benchmarking Report, conducted by the Association of Certified Fraud Examiners (ACFE) and SAS, presents insights from nearly 1,200 ACFE members surveyed in late 2023.

A key finding from the report is the heightened interest in AI and machine learning (ML) in fighting fraud. Currently, 18% of anti-fraud professionals employ AI/ML technology to combat fraud, and another 32% plan to implement these technologies within the next two years. If this trend holds, AI/ML use in anti-fraud programs will nearly triple over that period.
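The "nearly triple" projection follows directly from the two survey figures. A minimal sketch of that arithmetic, assuming the projection simply adds current adopters to planned adopters:

```python
# Projection implied by the survey figures (assumption: "nearly triple"
# compares today's adopters with today's adopters plus those planning adoption).
current_adoption = 0.18   # share of anti-fraud professionals using AI/ML now
planned_adoption = 0.32   # share planning to adopt within two years

projected_adoption = current_adoption + planned_adoption
growth_factor = projected_adoption / current_adoption

print(f"Projected adoption: {projected_adoption:.0%}")  # 50%
print(f"Growth factor: {growth_factor:.1f}x")           # 2.8x, i.e. nearly triple
```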

However, despite this fervent interest, adoption of AI and ML for fraud detection and prevention has risen by only a meagre five percentage points since 2019. This is a far cry from the expected adoption rates noted in the 2019 and 2022 reports, which were 25% and 26% respectively.

In contrast to the stalled growth of some data analysis techniques, the application of biometrics and robotics in anti-fraud programs has grown steadily. Since 2019, use of physical biometrics has risen 14 percentage points, with 40% of respondents now citing it as part of their fraud-fighting arsenal. Additionally, 20% of those surveyed reported using robotics in their anti-fraud programs, a significant jump from 9% in 2019. These technologies found their highest application in the banking and financial sector, where 51% use physical biometrics and 33% use robotics.

John Gill, ACFE President, warned that while generative AI-powered tools are becoming more accessible, they can be dangerous if misused. "Three in five organisations foresee increasing their anti-fraud technology budgets over the next two years. How they invest these funds will determine who will seize the upper hand in what's become a technology arms race with criminal enterprises. It's an uphill battle when you consider that, unlike the fraudsters, organisations face the added challenge of having to use these technologies ethically," he said.

Stu Bradley, Senior Vice President of Risk, Fraud and Compliance Solutions at SAS, mirrored this sentiment, highlighting the complexities of scaling the AI and analytics life cycle. "AI and machine learning aren't simple, plug-and-play applications. However, their benefits can be more readily realised by deploying modularised solutions across the risk management spectrum on a single, AI-powered platform. That's SAS' approach with cloud-native, language-agnostic SAS Viya," he said.

While there's a clear inclination towards the adoption of generative AI in anti-fraud programs, several factors may inhibit this predicted rise, including budget restrictions, data quality, skills gaps, and the ethical dilemma of "could we" versus "should we" in deploying AI technologies.

ACFE Research Director Mason Wilder commented on this, stating: "Generative AI has made great strides these last few years, so it's no surprise that organisations are incorporating it into their anti-fraud initiatives. As a society, we are still learning all the advantages and disadvantages to using the technology, but more organisations are beginning to take that first step."