IT Brief New Zealand - Technology news for CIOs & IT decision-makers
Story image

Kaspersky warns of AI's growing use in sophisticated attacks

Wed, 14th Aug 2024

Kaspersky has highlighted several potential offensive applications of AI by cybercriminals and the necessity for developing proactive cybersecurity defences. The increasing accessibility of advanced AI systems has not only led to widespread usage by individuals and businesses but has also opened avenues for threat actors to leverage these technologies for sophisticated attacks.

The use of AI for malicious purposes has been observed in several areas. One notable instance is the employment of AI tools like ChatGPT to write malicious software and automate attacks against multiple users. In addition, AI programs are capable of logging users' smartphone inputs by analysing acceleration sensor data, potentially capturing sensitive information such as messages, passwords, and bank codes. Another concern is the ability of AI-driven swarm intelligence to operate autonomous botnets that can communicate with each other to restore malicious networks after experiencing damage.

Kaspersky's recent research delved into the use of AI for password cracking, a cybersecurity concern that has become increasingly pertinent. Alexey Antonov, Lead Data Scientist at Kaspersky, explained the implications of a massive data leak published online in July 2024 that contained approximately 10 billion lines with 8.2 billion unique passwords. Antonov remarked, "We analysed this massive data leak and found that 32% of user passwords are not strong enough and can be reverted from encrypted hash form using a simple brute-force algorithm and a modern GPU 4090 in less than 60 minutes." He further revealed that when a language model was trained on the password database and used for checking passwords, 78% could be cracked using the AI method, which was about three times faster than traditional brute-force algorithms. Only 7% of the analysed passwords were strong enough to resist a long-term attack.

AI also poses challenges in the realm of social engineering. Cybercriminals can use AI to generate credible-looking content, including text, images, audio, and video. Large language models like ChatGPT-4 can be exploited to create sophisticated phishing messages that overcome language barriers and craft personalised emails based on users’ social media information. These phishing attacks can mimic specific writing styles, rendering them harder to detect. Deepfakes represent another significant cybersecurity challenge. AI-generated deepfakes have been utilised in celebrity impersonation scams, leading to substantial financial losses. In one of the most elaborate attacks recorded, scammers in Hong Kong used deepfakes to impersonate company executives in a video conference call, successfully convincing a finance worker to transfer approximately USD $25 million.

Beyond AI’s use for harmful purposes, adversaries can also target AI algorithms themselves, with attacks such as prompt injection and adversarial attacks. Prompt injection attacks involve attackers creating requests that bypass previous prompt restrictions on large language models. Adversarial attacks on machine learning algorithms involve hidden information in images or audio that can confuse AI systems and force incorrect decisions. With AI becoming more integrated into daily life through products like Apple Intelligence, Google Gemini, and Microsoft Copilot, addressing AI vulnerabilities is increasingly urgent.

Kaspersky has been utilising AI technologies to protect its customers for many years. The company employs various AI models to detect threats and continuously researches AI vulnerabilities to enhance the resilience of its technologies. Moreover, Kaspersky actively studies different harmful techniques to provide reliable protection against offensive AI actions.

Follow us on:
Follow us on LinkedIn Follow us on X
Share on:
Share on LinkedIn Share on X