A new study warns that DeepSeek’s free-to-use open-source AI models could be used by cybercriminals to generate dangerous malware. The models’ weak guardrails, researchers say, make them a significant security threat.
Governments and regulators have raised alarms over the potential for large language models (LLMs) like ChatGPT and Gemini to be used maliciously, and this new research highlights a growing concern. Although some LLMs designed for criminal use require payment, the freely accessible DeepSeek R1 model hands cybercriminals a similar capability at no cost.
DeepSeek R1, a reasoning model created by the Chinese company DeepSeek, was found to be capable of generating basic malware structures. According to Nick Miles of Tenable Research, the model’s guardrails were "trivial to work around" and vulnerable to various jailbreaking techniques.
In one experiment, Miles used DeepSeek to attempt to create a keylogger, a type of malware that records keystrokes on a victim’s device. The model initially refused to assist, but Miles got around the refusal simply by telling it the task was "for educational purposes only."
Once past that barrier, Miles received step-by-step instructions from DeepSeek for building a keylogger. Parts of the model’s code had to be rewritten by hand, but the keylogger ultimately worked.
Miles also attempted to generate a simple ransomware sample, a type of malware that encrypts a victim’s files and demands payment for their return. As with the keylogger, DeepSeek initially warned against creating such malicious code, but after some back and forth, Miles generated several ransomware samples. As before, manual editing was needed to make them functional.
The key takeaway from the research is that, with simple manipulation, bad actors could bypass DeepSeek’s restrictions and use it to create dangerous malware.
The findings are not entirely bleak, however. The malware DeepSeek produced required a certain level of coding expertise to finish, meaning it would not be immediately usable by someone with no technical knowledge.
Even so, Miles points out that DeepSeek still serves as a useful tool for cybercriminals looking to quickly familiarize themselves with malicious coding techniques.
“This research suggests that DeepSeek is likely to fuel the development of malicious AI-generated code in the near future,” Miles concludes.
The findings underscore the need for better safeguards in open-source AI models, particularly those freely available to the public. While other LLM providers have implemented stricter guardrails, DeepSeek’s lack of robust protections raises concerns about the potential misuse of its technology.
DeepSeek is a Chinese tech company founded in July 2023 by Liang Wenfeng that specializes in developing artificial intelligence (AI) models, particularly LLMs. The company is known for releasing its AI resources as open source, enabling developers and researchers to access and experiment with its technology.
Despite the company’s contributions to the AI field, the security vulnerabilities and lack of robust guardrails in its open-source models have raised concerns, leaving them susceptible to misuse such as the creation of malware.