Global report reveals mixed attitudes towards AI in healthcare
A recent report published by Elsevier has revealed significant differences in attitudes towards artificial intelligence (AI) among researchers and clinicians across countries, with a particular focus on the United States, China, and India. The 'Insights 2024: Attitudes toward AI' report, based on a survey of 3,000 researchers and clinicians across 123 countries, offers a comprehensive look at how AI is perceived and used in research and healthcare.
The report underscores a growing recognition of AI's potential to accelerate knowledge discovery, enhance work quality, and reduce costs. Among those surveyed, 94% of researchers and 96% of clinicians believe AI will significantly aid in accelerating knowledge discovery. Additionally, 92% of researchers and 96% of clinicians think AI will rapidly increase the volume of scholarly and medical research.
However, AI adoption rates differ markedly from country to country. Among respondents familiar with AI, 54% have actively used it, and 31% have used it for a specific work-related purpose. Work-related usage is notably higher in China (39%) and lower in India (22%). In the United States, only 11% of respondents consider themselves very familiar with AI or use it frequently in their work.
Expectations for future use also vary by region. Overall, 67% of those who have not yet used AI expect to do so within two to five years; the figure is highest in China (83%), followed by India (79%) and the United States (53%). This suggests that researchers and clinicians in China and India are more optimistic about integrating AI into their work than their American counterparts.
Sentiment about AI's future impact on work also differs significantly. According to the report, only 27% of US respondents feel positive about AI's future influence on their work, compared with 46% in China and 45% in India. These findings point to a more cautious attitude towards AI in the US than the more enthusiastic responses from China and India.
Despite a general willingness to adopt AI, both researchers and clinicians expressed significant concerns about misinformation and errors. Some 95% of researchers and 93% of clinicians believe AI could be used to spread misinformation, while 86% of researchers and 85% of clinicians fear that AI could cause critical errors, with many also voicing concerns over the potential erosion of critical thinking skills.
Transparency and trust in AI tools emerge as critical factors in their acceptance. Among respondents, 81% expect to be told whether the tools they are using depend on generative AI, and 71% expect generative AI tools to draw on high-quality, trusted sources. Furthermore, 78% of researchers and 80% of clinicians expect to be informed if peer-review recommendations on manuscripts make use of generative AI.
Kieran West, Executive Vice President of Strategy at Elsevier, commented on the findings: "AI has the potential to transform many aspects of our lives, including research, innovation and healthcare, all vital drivers of societal progress. As it becomes more integrated into our everyday lives and continues to advance at a rapid pace, its adoption is expected to rise. Researchers and clinicians worldwide are telling us they have an appetite for adoption to aid their profession and work, but not at the cost of ethics, transparency and accuracy. They have indicated that high quality, verified information, responsible development and transparency are paramount to building trust in AI tools, and alleviating concerns over misinformation and inaccuracy. This report suggests some steps that need to be taken to build confidence and usage in the AI tools of today and tomorrow."