IT Brief New Zealand - Technology news for CIOs & IT decision-makers

Study finds employee use of GenAI apps poses privacy risks

Tue, 6th Aug 2024

A new study from Harmonic Security has investigated the use of GenAI applications by employees and has raised concerns about potential privacy risks.

The research, titled 'GenAI Unleashed', surveyed 1,000 enterprise employees who used at least one GenAI app in the past month. The findings reveal how employees are increasingly relying on these tools for various tasks.

The study reports that on average, each user uploads data to 8.25 different GenAI apps every month. Usage varies significantly, with 18.9% of 'power users' engaging with more than 12 apps, while 10% use only one. There has been an 11% decrease in the number of apps used since June, suggesting that employees may be refining their choices and focusing on the most effective tools for their needs.

A substantial 47% of these apps are utilised primarily for content creation, summarising, or editing. The next prominent category is software engineering, accounting for 15% of usage. Other significant applications include data interpretation, processing, and analysis (12%), business and finance (7%), and problem solving and troubleshooting (6%).

Harmonic Security's analysis identified 5,020 GenAI or GenAI-enabled tools currently in use. ChatGPT stands out as the most prevalent tool, utilised by 84% of the surveyed users, making it six times more popular than the next leading app, Google Gemini, which sees 14% usage. Other notable applications include Microsoft Copilot, Perplexity, and Claude. Business tools such as Slack, Notion, and Grammarly, which are used for content creation, summarisation, translation, and streamlining support, constitute 25% of the apps.

Meanwhile, customer service tools account for 13% of the total usage.

However, the study has flagged significant privacy and security concerns associated with the usage of these GenAI apps. It revealed that 30.8% of the 5,020 applications examined declare they train on customer data. This means that any sensitive information uploaded can be used to improve these models. Furthermore, less than 1% of these applications have a Trust Centre, which would provide a summary of crucial security and privacy settings.

Commenting on the findings, Alastair Paterson, co-founder and CEO of Harmonic Security, said, "The study finds that once employees start using GenAI they tend to go all in with a high number of apps being used each month."

"We can probably expect this to fall as the market settles and employees settle on the most useful app for them. However, with a choice of over 5,000 GenAI apps and a high number of average apps used by employees, there are too many out there for IT departments to properly keep track of using existing tools. We particularly urge organisations to pay attention to apps that are training on customer data."

The study underscores the need for businesses to be vigilant about the privacy implications of the GenAI tools their employees are using. The lack of transparency around data training practices and the scarcity of Trust Centres suggest that many GenAI applications may not sufficiently safeguard customer data. As the adoption of GenAI tools continues to grow, organisations are advised to scrutinise the privacy policies of these apps to protect sensitive information effectively.
