Gartner debunks common AI misconceptions
IT and business leaders are often unclear about what artificial intelligence (AI) can do for their organisations and are held back by several common misconceptions. Gartner says IT and business leaders developing AI projects must separate reality from myth to shape their future strategies.
Gartner research VP Alexander Linden says, "With AI technology making its way into the organisation, it is crucial that business and IT leaders fully understand how AI can create value for their business and where its limitations lie.
"AI technologies can only deliver value if they are part of the organisation's strategy and used in the right way.
Gartner has identified five common myths and misconceptions about AI.
Myth No. 1: AI works the same way the human brain does
AI is a computer engineering discipline. In its current state, it consists of software tools aimed at solving problems. While some forms of AI might give the impression of being clever, it would be unrealistic to think that current AI is similar or equivalent to human intelligence.
"Some forms of machine learning (ML) – a category of AI - may have been inspired by the human brain, but they are not equivalent," Linden said.
"Image recognition technology, for example, is more accurate than most humans but is of no use when it comes to solving a math problem. The rule with AI today is that it solves one task exceedingly well, but if the conditions of the task change only a bit, it fails.
Myth No. 2: Intelligent machines learn on their own
Human intervention is required to develop an AI-based machine or system. The involvement may come from experienced human data scientists who perform tasks such as framing the problem, preparing the data, determining appropriate datasets, removing potential bias in the training data (see myth No. 3) and – most importantly – continually updating the software to enable the integration of new knowledge and data into the next learning cycle.
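As a rough illustration of that cycle, the Python sketch below shows a person retraining a simple classifier once new labelled data arrives. The dataset, model and library choices are placeholder assumptions for the example, not a method prescribed by Gartner.

    # Minimal sketch of the human-driven learning cycle described above.
    # The dataset, model choice and retraining step are illustrative
    # assumptions, not a prescribed workflow.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # 1. A person frames the problem and prepares the data.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 2. A person chooses and trains the initial model.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("initial accuracy:", accuracy_score(y_test, model.predict(X_test)))

    # 3. The model does not keep learning by itself: when new labelled data
    #    arrives, a person decides to fold it in and retrain.
    X_new, y_new = make_classification(n_samples=200, n_features=20, random_state=1)
    model.fit(np.vstack([X_train, X_new]), np.concatenate([y_train, y_new]))
    print("retrained accuracy:", accuracy_score(y_test, model.predict(X_test)))

Each numbered step in the sketch corresponds to a human decision; nothing in it happens without someone initiating it.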
Myth No. 3: AI can be free of bias
Every AI technology is based on data, rules and other kinds of input from human experts. Like humans, AI is intrinsically biased in one way or another.
"Today, there is no way to completely banish bias, however, we have to try to reduce it to a minimum," Linden said.
"In addition to technological solutions, such as diverse datasets, it is also crucial to ensure diversity in the teams working with the AI and have team members review each other's work. This simple process can significantly reduce selection and confirmation bias.
Myth No. 4: AI will only replace repetitive jobs that don't require advanced degrees
AI enables businesses to make more accurate decisions via predictions, classifications and clustering. These abilities have allowed AI-based solutions to take over mundane tasks, but also to augment the more complex tasks that remain.
An example is the use of imaging AI in healthcare. An AI-based chest X-ray application can detect diseases faster than radiologists. In the financial and insurance industries, robo-advisors are used for wealth management and fraud detection. These capabilities don't eliminate human involvement in those tasks; rather, they leave humans to deal with the unusual cases.
With the advancement of AI in the workplace, business and IT leaders should adjust job profiles and capacity planning as well as offer retraining options for existing staff.
Myth No. 5: Not every business needs an AI strategy
Every organisation should consider the potential impact of AI on its strategy and investigate how this technology can be applied to the organisation's business problems. In many ways, forgoing the exploitation of AI is the same as giving up the next phase of automation, which could ultimately place organisations at a competitive disadvantage.
"Even if the current strategy is 'no AI', this should be a conscious decision based on research and consideration. And, like every other strategy, it should be periodically revisited and changed according to the organisation's needs. AI might be needed sooner than expected," Linden concluded.