
UN releases global principles to tackle misinformation online

Thu, 4th Jul 2024

The United Nations has released its Global Principles for Information Integrity, a framework aimed at understanding and curbing the spread of misinformation, disinformation, and hate speech exacerbated by new technologies.

The principles call for transparency, accountability, and fairness among all internet stakeholders, including technology and AI companies, advertisers, the private sector, news media, civil society organisations, and member states. However, there has been criticism over the framework’s lack of specific measures addressing the issue of deepfakes.

The proposals suggest that stakeholders should avoid using, supporting, or amplifying disinformation and hate speech, particularly during crises and around elections, when information integrity is most at risk.

Advertisers are encouraged to demand transparency in digital advertising to avoid inadvertently funding disinformation. Media platforms are tasked with protecting users, especially children and other vulnerable groups, from harmful AI-generated content.

Pavel Goldman-Kalaydin, Head of AI at Sumsub, noted: "These principles from the UN are a great first step towards tackling the grave threat of misinformation online - although the framework fails to mention deepfakes.”

He elaborated on the use of AI in age estimation technology to detect minors, acknowledging its imperfections and biases but emphasising its benefits in enhancing age and content controls on social media platforms. “This technology and labelling AI content are essential steps for protecting children from harm and people from misinformation, manipulation, and fraud,” he said.

Commenting on the rise of deepfakes, Goldman-Kalaydin pointed out the increasing difficulty in distinguishing between real and AI-generated content. He recommended watermarking AI content as a preventive measure, similar to practices already implemented by platforms such as TikTok. These watermarks could be embedded in a file's metadata, helping users easily identify AI-generated content.
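To illustrate the kind of metadata-based labelling being described, the sketch below embeds a simple "AI-generated" tag in a PNG file using Python's Pillow library and reads it back. This is a minimal illustration only: the field names and values are hypothetical, and production provenance labelling typically relies on dedicated standards (such as cryptographically signed content credentials) rather than plain text chunks, which can be stripped or altered.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a placeholder image standing in for model output.
img = Image.new("RGB", (64, 64), "white")

# Hypothetical metadata fields flagging the content as AI-generated.
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")

img.save("generated_labelled.png", pnginfo=meta)

# A viewer or platform could later read the tag to surface a label to users.
labelled = Image.open("generated_labelled.png")
print(labelled.text.get("ai_generated"))  # -> "true"
```

Because plain metadata is easy to remove, such tags are usually paired with more robust techniques, which is one reason the experts quoted here also call for standardisation.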

Natalia Fritzen, AI Policy and Compliance Specialist at Sumsub, expressed concerns about the vagueness of the UN's requirements and their efficacy in addressing the misuse of AI, specifically regarding synthetic content like deepfakes.

“Worryingly, recent data reveals that deepfakes increased 245% year-on-year in Q1 this year,” she stated. She suggested more targeted actions for AI deployers, including removing malicious content from training datasets, forbidding inputs that violate company terms or human rights, and implementing watermarks.
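As a rough illustration of the "forbidding inputs that violate company terms" suggestion, a deployer might screen prompts before they ever reach a generative model. The snippet below is a toy sketch with hypothetical blocklist patterns; real systems rely on trained classifiers and human review rather than keyword matching alone.

```python
import re

# Hypothetical prohibited patterns; an actual policy would be far broader
# and maintained alongside classifier-based moderation and human review.
BLOCKED_PATTERNS = [
    re.compile(r"\bremove (?:the )?watermark\b", re.IGNORECASE),
    re.compile(r"\bfake (?:id|passport)\b", re.IGNORECASE),
]

def violates_policy(prompt: str) -> bool:
    """Return True if the prompt matches any prohibited pattern."""
    return any(p.search(prompt) for p in BLOCKED_PATTERNS)

if __name__ == "__main__":
    for prompt in ["Summarise this article", "Remove the watermark from this photo"]:
        verdict = "rejected" if violates_policy(prompt) else "allowed"
        print(f"{prompt!r} -> {verdict}")
```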

"Watermarks are not a silver bullet, however, and questions remain about their effectiveness - mainly due to issues with technical implementation, accuracy, and robustness," Fritzen added, emphasising the need for standardisation requirements.

Another critical aspect highlighted by the UN principles is the importance of AI literacy. As AI technologies rapidly evolve, there is a growing necessity for public awareness about potential risks, fostering what some experts term “pre-debunking.”

Fritzen commented, “With AI technologies evolving quickly, the population needs to be more and more aware of potential pitfalls in the contents they see.” She further stressed the need for effective regulation crafted in collaboration with private technology businesses to combat AI-related illicit activities, aiming to establish a robust regulatory framework.
