A recent study by Iris.ai, which surveyed over 500 corporate research professionals, found that 57% of Artificial Intelligence (AI) users are dissatisfied with the tools they use. Notably, 89% of those surveyed relied on ChatGPT, currently the leading tool for scientific research assistance. This points to the limitations of AI chat functions in adequately serving the scientific research community. The primary issues cited were inaccuracy (59%), misinformation (46%), and a lack of citations (42%).
Despite major advances in AI, such as DeepMind's identification of 2 million new materials, these benefits have yet to reach many researchers. The shortfall may stem from the concentration on chat interfaces in current AI tools. However, the study reveals that researchers demand more than chat from AI, and they want improvements to what is already on offer.
AI's potential extends far beyond chat functions alone: it can accelerate research timelines through a range of techniques. Yet fewer than a quarter of the researchers surveyed used these capabilities. For instance, only 23% used AI to summarise individual papers or optimise searches, and just 21% used it to extract knowledge from bodies of research. These essential functionalities are rarely found in general-purpose AI tools.
Victor Botev, CTO and co-founder of Iris.ai, addressed this issue, saying, 'Interacting with a user interface exclusively through a chat function is seen as something of the past. For AI to be truly effective, in science and beyond, we need to prioritise different forms of engagement. Mustafa Suleyman has said that the next stage of generative AI is interactive AI, and I agree. We're already seeing an appetite for this from scientific researchers.'
Botev added, 'It's clear from our study that, whilst researchers recognise the potential of AI, they're still looking for a tool that truly meets their needs. Our research team has developed our own metrics and algorithms to provide accurate, reliable, and comprehensive solutions that not only improve the research process but also enhance trust in AI capabilities. It is an ongoing journey, and we're excited to be at the forefront of this transformation.'
The results showed that, for researchers to place more trust in generative AI or make more use of it, certain features would need to improve: citing the origin (49%) and quality (46%) of data, providing metrics on correctness and uncertainty (45%), and summarising individual scientific research papers (45%).
The study offers valuable insight into current usage of, and satisfaction with, AI tools among scientific researchers, exposing the need for a shift towards interactive AI systems that meet the specific needs of this community.