Google report unlocks AI's power for social good

Mon, 16th Sep 2019

Google's AI Impact Challenge launched in October 2018 and, in less than a year, has attracted more than 2,600 applications from all over the world – including many from organisations with no prior experience working with artificial intelligence (AI).

The AI Impact Challenge is designed to inspire organisations around the world to submit their ideas about how they could use AI to address society's pressing challenges.

Now Google has documented some of those results in its Accelerating Social Good with Artificial Intelligence report, which shares insights and proposed ideas that could save people and the planet.

Google.org head of product impact Brigitte Hoyer Gosselink and Google AI program lead Carla Bromberg explain that the challenge attracted 2,602 applications; among the applicants, 55% of not-for-profits and 40% of for-profit organisations reported no prior experience with AI.

Most applications proposed to draw from open source machine learning libraries that bridge the gap between academia and mainstream use.

"AI is becoming more accessible as new machine learning libraries and other open source tools, such as Tensorflow and ML Kit, reduce the technical expertise required to implement AI. Organisations no longer need someone with a deep background in AI, and they don't have to start from scratch," note Gosselink and Bromberg.

They also observed that organisations are trying to tackle similar problems, often with similar approaches.

"For example, we received more than 30 applications proposing to use AI to identify and manage agricultural pests," they state, adding that the report could encourage people to collaborate and share resources to solve problems.

Other projects include the application of machine learning to vaccine data that predicts viability at every point in the supply chain, and the application of natural language processing to analyse chat transcripts from crisis call helplines.
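A natural language pipeline of the kind described might start from something as simple as the sketch below, which scores short transcripts by urgency. The example messages, the "urgent"/"routine" labels, and the choice of scikit-learn are illustrative assumptions, not details from any winning entry.

# Illustrative sketch only: invented transcripts and labels, not helpline data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

transcripts = [
    "I don't know what to do, I can't cope anymore",
    "Can you tell me your opening hours?",
    "I feel like hurting myself tonight",
    "I'd like some information about your services",
]
labels = ["urgent", "routine", "urgent", "routine"]

# TF-IDF turns raw text into numeric features; logistic regression learns to rank urgency.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(transcripts, labels)

print(classifier.predict(["I really can't go on like this"]))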

The Google AI Impact Challenge awarded 20 recipients a combined US$25 million in funding. Winning organisations come from around the world, including places such as Australia, the United States, Indonesia, the United Kingdom, Colombia, Brazil, Uganda, and Switzerland.

The report outlines seven insights from the application review:

Machine learning is not always the right answer

Some submitted proposals would have been faster, simpler, and cheaper to deliver without ML. In other cases, ML was not yet sophisticated enough to tackle the problem at hand.

Data accessibility challenges vary by sector

Data needs to be reliable and meaningful, but access to that data can be difficult, depending on the sector.

For example, those in health and education have better access to data than those in crisis response or equality and inclusion. However, all sectors face challenges including privacy, partnerships for data access or collection, and the ability to collect data from first-party sources.

Demand for technical talent has expanded from specialised AI expertise to data and engineering expertise

Most proposals rely on existing AI frameworks, which reduces the need for AI specialists. However, teams still need enough data and engineering expertise to put those frameworks to use.

This gap could be addressed through dedicated funding, partnerships with AI experts, and training grants.

Transforming AI insights into real-world social impact requires advance planning

Operationalising and sharing AI models within real-world programs while work is in progress can be a challenge. Projects fare better when they have a clear path towards implementation and gather feedback from potential users early in the program.

Most projects require partnerships to access both technical ability and sector expertise

Partnerships are essential to delivering social good; however, partners may have different goals and working cultures.

Many organisations are working on similar projects and could benefit from shared resources

Resource-constrained projects would benefit from a shared pool of knowledge and resources; however, some organisations may not be willing to share data.

Organisations can share their own knowledge across multiple stages of AI implementation, incentivise other knowledgeable organisations to open source their work, and ensure that systems are in place to make shared knowledge easily accessible to the organisations it could benefit.

Organisations want to prioritise responsibility but don't know how

Some applications showed little understanding of responsible AI practices and their potential risks.

"We found that many organisations needed to more carefully consider risks related to creating or reinforcing unfair bias, incorporating privacy design principles, and mitigating risk of harmful use or misuse," the report says.

Google's AI principles state that AI applications must be socially beneficial; avoid creating or reinforcing unfair bias; be built and tested for safety; be accountable to people; incorporate privacy design principles; uphold high standards of scientific excellence; and be made available for uses that accord with these principles.

"Global momentum around AI for social good is growing—and many organisations are already using AI to address a wide array of societal challenges. As more social sector organisations recognise AI's potential, we all have a role to play in supporting their work for a better world," conclude Gosselink and Bromberg.
