Trust, truth, and ethics – AI’s role in the media
The media was spotlighted at an event last week in Auckland, attended by nearly 200 people discussing the rapid rise of artificial intelligence and its influence on society.
Chaired by NewZealand.ai founder Justin Flitter, the event aimed to spur further discussion about the increasing use of AI in the media, with views presented and questions posed by a panel of six experts as well as the audience.
The panel was drawn from a range of industries and included:
- Microsoft principal software engineer Nigel Parker
- AI Forum NZ Chair Member Stu Christie
- Department of Post Head of Operations Katie Hinsen
- Hudson Gavin Martin principal Anchali Anandanayagam
- Digital storyteller Cassie Roma
- IBM artificial intelligence leader Issy Fernando
“It was a lively and informative debate and revealed the level of interest among a very diverse group of people about the potential of AI,” says Justin Flitter.
“What’s obvious to anyone coming to this and other AI events is the range of people attending, from all sorts of backgrounds, countries of origin and ethnicities, and a wide age range.”
One of the guests was Xero writer and content strategist Josh Drummond, who presented his experiments with AI bots, which resulted in an opinion piece in the style of Mike Hosking and showed just how easy it is to produce fake media.
While the result was realistic and humorous, the conclusion drawn was that AI is not yet at the stage where it can take the place of writers or popular presenters, but it can support them.
His views were supported by Katie Hinsen, who highlighted the importance of journalism as a skill and the implications if it were lost.
Cassie Roma also felt a balance was important between the human touch and what a bot can produce, especially when it comes to writing entertaining copy or “whacky shit” that a machine could never produce – well, not yet at least.
Ethics was also raised, with Nigel Parker saying AI is birthing something that brings up huge questions; he suggested people need to know whether they are liaising with a bot or a person.
A legal perspective was put forward by Anchali Anandanayagam, who said the law needs to keep up with the rapid changes.
She highlighted that guidelines, as well as copyright law, need revision to remain current.
“While many questions were raised the overall mood was AI is a force for good not just to support the media but in other areas too,” Flitter says.
“There were many examples of good being facilitated and created by AI, where it hasn’t been possible before.”
One example highlighted by the panel was using AI to speed up tuberculosis diagnosis in rural locations in Asia, reducing the need for an on-site expert whose involvement would often take too long and threaten lives.
Issy Fernando highlighted further examples: mole-mapping in children’s hospitals to reduce the risk of skin cancer, a bot to help people learn Te Reo Maori, and another to improve guide dog training.
While AI was agreed to be a force for good – agnostic, and able to be used to strengthen humanity – everyone also felt trust was imperative and that people should know when they are interacting with a bot.
“Our approach to innovation is being challenged by AI, which is already having significant impacts in many industry sectors – media is just one of them,” says Flitter.
“The future is coming at us at a ferocious rate and we all need to discuss, understand and then adapt. That is why these events matter: they start and foster a very important conversation for our society and economy.
“As for media, human involvement remains essential and welcomed, but people need to be aware of the potential for fake news and should be informed whether or not a bot is involved. If this doesn’t happen the effect may be a flight back to traditional media - time will tell.”