Why AI algorithms are only as good as the data they’re fed
Article by Snowflake Asia Pacific and Japan vice president Peter O’Connor
Artificial intelligence (AI) has shifted from being a computing novelty into mainstream use much faster than many people expected.
Once the domain of researchers and computer scientists, it’s now powering everything from personal assistants to aircraft and business accounting systems.
However, it must be remembered that AI systems are only as good as the data being fed into them.
If this data is of poor quality or contains bias, this will have a significant impact on subsequent decisions made by the software.
There’s no question that AI has the potential to address some of society’s biggest problems. Yet, time after time, AI-powered projects have failed the ‘fairness’ test as algorithms have amplified gender, ethnic or social biases that lurk within the data sets being used.
Guiding models
For this reason, it’s vital that organisations collating large data sets develop robust models and ethics that encourage transparency, equality, and trust.
This is fundamental to the task of delivering AI systems that produce fair and balanced outcomes.
If these models and guidelines are not in place, biased data sets will never be challenged.
A particular data set might be tainted by the conscious or unconscious biases of the people who collated it, but we would never know.
There is also a tendency to overestimate the capabilities of the current generation of AI tools.
There have been cases where car drivers, lulled into a false sense of security by vehicle assistance systems, have failed to take back control when something unexpected happened.
Usage of AI tools in the legal sector is also causing some issues.
A number of judges now seek recommendations from an AI tool before ruling in cases involving things such as bail or parole.
Although AI-provided results are supposed to be recommendations only, it can be tempting to treat them as definitive.
In the wider community, many people are becoming increasingly uncomfortable with AI and the rate at which it is being developed and adopted.
High-profile businessman Elon Musk gained headlines when he described the technology as society’s “biggest existential threat”.
Indeed, in a recent consumer sentiment survey commissioned by Snowflake, 62% of respondents said AI tools should not be used to make decisions in areas such as criminal justice and healthcare.
For AI to become a trusted and valuable contributor to society, the organisations making use of it must focus on fairness and transparency from the outset.
Any AI algorithms that can’t be clearly explained and independently assessed should not be trusted with critical decisions.
It might take some extra effort, but tools that give poor results should be tuned to remove any inherent bias.
For example, a facial recognition program can be further trained so that it gives equal weight to both sexes and all skin colours.
Of greatest importance, however, is that organisations focus attention on the data being used. They must have clear guidelines in place about how this data is collected and how it is put to use.
There are two key areas in which attention needs to be focused.
First, all organisations need a stated data governance policy that dictates how data is handled.
This is particularly important in an era when cloud-based storage is increasingly used and data sets are regularly combined in different ways.
Second, organisations must be able to explain how the AI tools they use actually work.
While machine learning algorithms such as deep learning neural networks can be very powerful, a lack of transparency into how they reach their conclusions creates regulatory and ethical risks.
To achieve this, it’s worth considering partnering with risk and compliance professionals early in the tool development process.
It’s clear that AI is becoming an increasingly powerful force in many areas of modern society, and its development is accelerating rapidly.
Any failure to get the required approaches and frameworks in place now could risk unforeseen outcomes in the future.
By taking the time to understand the importance of both data quality and the transparency of algorithms, society will be best placed to reap the significant benefits that AI can deliver.