
Supply Chain Knowledge Armory

Get the essentials for understanding software supply chain security

AI model security: concerns, threats, and risks

What are the risks of DeepSeek-R1's open source AI model?

DeepSeek-R1 is an open-weight AI model that anyone can run on their own hardware. Its security implications are not fundamentally different from those of any other AI model, aside from the model's origin. Because DeepSeek-R1 comes from China, a geopolitical adversary of the United States, concerns have been raised about potential biases promoting the CCP's agenda, censorship of specific information or perspectives, or even hidden backdoors that could compromise U.S.-based entities using the model. For instance, certain keywords could be embedded to trigger malicious behavior within the model.

DeepSeek also offers R1 as a hosted service. Sending your data to the DeepSeek service raises privacy concerns, but they are similar to those of any organization's AI service: handing data to any third party carries risk, and that risk is not unique to AI. Similar issues arise with any technology developed outside a nation's legal jurisdiction, particularly in countries considered adversaries. The U.S. bans on Huawei and ZTE communication equipment, for example, stem from comparable security concerns. Because DeepSeek is based in China, legal issues can also arise depending on the data privacy laws that apply.

As quoted in InformationWeek, January 28, 2025: “All AI models have the same risks that any other software has and should be treated the same way,” Mike Lieberman, Co-Founder and CTO of software supply chain security firm Kusari, says in an email interview. “Generally, AI could have vulnerabilities or malicious behaviors injected … Assuming you’re running AI following reasonable security practices, e.g., sandboxing, the big concerns are that the model is biased or manipulated in some way to respond to prompts inaccurately or maliciously.”
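One baseline practice behind "treating a model the same way as any other software" is verifying downloaded weights against a published checksum before loading them, so a tampered or substituted artifact is caught early. The sketch below illustrates the idea in Python; the file name and the stand-in weights are hypothetical placeholders, not DeepSeek's actual artifacts or digests.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large model weights need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Return True only if the file matches the published digest."""
    return sha256_of(path) == expected_sha256.lower()


# Demo with a tiny stand-in file (hypothetical name, not a real model).
weights = Path("model.safetensors")
weights.write_bytes(b"demo weights")
published = hashlib.sha256(b"demo weights").hexdigest()
ok = verify_artifact(weights, published)
```

In practice the expected digest would come from a trusted, out-of-band source (a signed release manifest, for example), since a checksum published alongside the download offers no protection if the distribution point itself is compromised.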

Want to learn more?