DeepSeek faces bans amid security & data privacy woes
DeepSeek, a rapidly emerging player in the artificial intelligence (AI) sector, is facing mounting scrutiny over its security practices and data management policies.
SecurityScorecard's recent STRIKE research highlights a series of vulnerabilities and data privacy concerns associated with the platform, which have led to a spate of bans in various regions.
Corian Kennedy, Senior Manager of Threat Insights & Attribution at SecurityScorecard, delineated several critical security flaws in DeepSeek's software architecture. These include hardcoded encryption keys, weak cryptographic algorithms, and SQL injection risks, which potentially expose user data to exploitation.
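SecurityScorecard has not published the affected code, so the snippet below is a generic, hypothetical Python sketch of what these flaw classes typically look like, alongside safer counterparts. The key value, table name and helper functions are illustrative only and are not drawn from DeepSeek's app.

```python
import hashlib
import secrets
import sqlite3

# Anti-pattern 1: a hardcoded secret shipped inside the client.
# Anyone who unpacks the app package can recover it.
HARDCODED_KEY = "static-app-key-123456"  # hypothetical placeholder value

# Anti-pattern 2: a weak, outdated algorithm (MD5 as a stand-in).
def weak_fingerprint(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()  # MD5 is unsuitable for security use

# Anti-pattern 3: building SQL by string concatenation invites injection.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()  # "' OR '1'='1" dumps every row

# Safer equivalent: a parameterised query keeps input as data, not SQL.
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)", [("alice",), ("bob",)])
    print(find_user_unsafe(conn, "' OR '1'='1"))  # injection returns all rows
    print(find_user_safe(conn, "' OR '1'='1"))    # parameterised query returns none
    # Per-install random secrets avoid the hardcoded-key problem entirely.
    print(weak_fingerprint(b"example"), secrets.token_hex(16))
```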
Additionally, the app reportedly gathers comprehensive user data, storing it on servers based in China, which raises alarms about possible government access due to the country's stringent data regulations.
DeepSeek has also been noted for transmitting user data to domains affiliated with Chinese state-owned entities, as well as ByteDance.
This has fuelled concerns over data sovereignty and government surveillance. Despite its claims of transparency, the app employs anti-debugging techniques that make it resistant to thorough security assessment, a move critics argue could obscure further vulnerabilities.
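The research does not specify which anti-debugging tricks the app uses. As a rough, hypothetical illustration of the general technique, the Python sketch below shows two common checks (a trace hook and Linux's TracerPid field) that let a program change behaviour when it suspects an analyst is attached; real mobile implementations use platform-specific equivalents.

```python
import sys


def debugger_suspected() -> bool:
    """Rough heuristics of the kind anti-debugging code relies on."""
    # A Python-level trace hook (e.g. pdb, coverage) is attached.
    if sys.gettrace() is not None:
        return True
    # On Linux, /proc/self/status reports a non-zero TracerPid when another
    # process (gdb, strace, a dynamic-analysis sandbox) is tracing us.
    try:
        with open("/proc/self/status") as status:
            for line in status:
                if line.startswith("TracerPid:"):
                    return int(line.split()[1]) != 0
    except OSError:
        pass  # non-Linux platforms: fall through
    return False


if __name__ == "__main__":
    if debugger_suspected():
        # Real-world anti-debugging code might exit, corrupt its own state,
        # or silently alter behaviour here, frustrating security review.
        sys.exit("analysis environment detected")
    print("running normally")
```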
The platform's privacy policy also lacks clear guidelines on data sharing, raising red flags about third-party access. This has prompted regulatory responses, including restrictions or outright bans by authorities in Italy and Australia, by the United States Navy, and by governments across the Asia Pacific region.
Security concerns extend beyond just technical vulnerabilities, touching upon broader implications for data sovereignty and business practices.
Steve Tzortzidis, Director of Data & AI at V2, cautions against Australian companies hastily adopting DeepSeek's technologies without fully understanding the associated risks.
He points to potential political motivations behind DeepSeek's operations, particularly given that its user data is held in storage facilities in China.
Tzortzidis underscores that the allure of DeepSeek's performance and affordability compared to its Silicon Valley counterparts should not overshadow the importance of maintaining data sovereignty and safeguarding sensitive information. V2 advocates careful scrutiny of privacy policies, noting that user data stored overseas by DeepSeek could be utilised to train AI models without consent.
On a broader scale, Jennifer Cheng, Director of Cybersecurity Strategy for Asia Pacific and Japan at Proofpoint, acknowledges both the benefits and risks associated with generative AI platforms like DeepSeek.
Cheng highlights the dual-edged nature of such platforms, which offer innovative potential but also pose significant risks of data leakage if not managed with robust cybersecurity strategies.
Proofpoint's recent findings indicate that over half of Singapore's Chief Information Security Officers (CISOs) perceive generative AI tools as top organisational risks.
Cheng suggests implementing human-centric cybersecurity frameworks to navigate these challenges effectively. Such frameworks should prioritise transparency, fairness, and accountability in data handling, while empowering employees through consistent education on cyber threats.
The call for vigilance is clear as businesses navigate the promising yet perilous landscape of AI technology.
While DeepSeek continues to make technological advances, the security and data privacy concerns necessitate rigorous scrutiny by organisations considering its integration into their systems.