The capabilities of generative and agentic AI models demand a proactive approach to protecting the AI supply chain.
March 25, 2025
Artificial intelligence (AI) models are a major part of technology right now. While the industry has used AI for analysis for years (machine learning for fraud detection, for example), the rise of generative AI to create content and agentic AI to take external action presents an opportunity to change how businesses work. But AI models add complexity to supply chain security: they involve both software and training data, and bad data will give you bad results. If you’re not already securing your AI, CSO Online says this is the year to start.
The perils of AI software supply chains mirror those of the broader software landscape, with some added intricacies. Traditional supply chain security is concerned with the software itself; AI supply chains add the dataset used to train the model. The same model architecture trained on two different datasets can produce dramatically different output. Does the model have the input necessary to give reasonable output, or will it tell you to put glue on pizza?
Before you can start securing your AI, you need policies for how AI can be used. You might be fine with developers using an AI coding assistant but not with HR using AI to make promotion decisions. Next, you have to know where AI is used within the organization. Does the AI usage meet your organizational requirements?
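An AI inventory checked against policy can start out very simple. Here is a minimal sketch in Python; the team names, use cases, and record format are hypothetical examples for illustration, not a standard:

```python
# Minimal sketch of checking an AI-usage inventory against policy.
# The teams, use cases, and record format are hypothetical examples,
# not a standard.
ALLOWED_USES = {
    "engineering": {"coding-assistant"},
    "marketing": {"content-drafting"},
    "hr": set(),  # e.g., no AI in promotion decisions
}

inventory = [
    {"team": "engineering", "use": "coding-assistant", "model": "internal-llm"},
    {"team": "hr", "use": "promotion-scoring", "model": "vendor-model"},
]

for entry in inventory:
    allowed = ALLOWED_USES.get(entry["team"], set())
    status = "ok" if entry["use"] in allowed else "POLICY VIOLATION"
    print(f'{entry["team"]}/{entry["use"]} ({entry["model"]}): {status}')
```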
At the heart of securing your AI is understanding the provenance of the software and data. What software goes into the model? What data is used to train it? Where does that data come from? You want to make sure that the training data does not perpetuate biases or cause the model to produce false information. You might not have the ability to inspect the entire dataset, but you should at least get your models from trusted sources that provide attestations about what the models contain.
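One provenance check you can automate today is verifying that a model artifact matches the digest its supplier published. A minimal sketch, assuming the expected digest comes from a trusted attestation or checksum file; the path and placeholder digest below are hypothetical:

```python
# Sketch: verify a downloaded model artifact against a supplier-published
# digest. The path and expected digest are hypothetical placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

expected_digest = "<digest from the supplier's attestation>"
model_path = "models/example-model.bin"

if sha256_of(model_path) == expected_digest:
    print("model matches the published digest")
else:
    print("digest mismatch: do not deploy")
```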
Ideally, you should use only truly open source AI models: models open to inspection, modification, and redistribution, trained on an openly accessible dataset with transparent origins that offers the same freedoms. After all, you can’t trust (or fix) what you can’t inspect.
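If you source models from a public hub, checking the declared license is a cheap first gate. A rough sketch, assuming a Hugging Face-style model card (a README.md with YAML front matter); the accept-list is an example policy, not an authoritative one:

```python
# Sketch: check the license a model card declares before pulling the model.
# Assumes a Hugging Face-style README.md with YAML front matter; the
# accept-list is an example policy, not an authoritative one.
import yaml  # pip install pyyaml

ACCEPTED_LICENSES = {"apache-2.0", "mit", "bsd-3-clause"}

def front_matter(readme_path: str) -> dict:
    """Parse the YAML front matter block at the top of a model card."""
    with open(readme_path, encoding="utf-8") as f:
        text = f.read()
    if text.startswith("---"):
        _, block, _ = text.split("---", 2)
        return yaml.safe_load(block) or {}
    return {}

meta = front_matter("model/README.md")
license_id = str(meta.get("license", "")).lower()
if license_id in ACCEPTED_LICENSES:
    print(f"license ok: {license_id}")
else:
    print(f"review needed: {license_id or 'no license declared'}")
```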
Next, implement security best practices internally and advocate for greater transparency and accountability from your suppliers. Make essential security metadata a minimum requirement in your organization: software bills of materials (SBOMs), SLSA (Supply-chain Levels for Software Artifacts) provenance, and SARIF (Static Analysis Results Interchange Format) documents. Many of the open source projects you rely on are maintained by volunteers, so bring help to improve the practices in your upstreams rather than just making demands of them.
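Even just reading the metadata you require pays off. For example, a few lines of Python can list the components declared in a CycloneDX JSON SBOM so missing names or versions stand out; the file path here is a placeholder:

```python
# Sketch: list the components declared in a CycloneDX JSON SBOM.
# "sbom.cdx.json" is a placeholder path; the "components", "name", and
# "version" fields follow the CycloneDX schema.
import json

with open("sbom.cdx.json", encoding="utf-8") as f:
    sbom = json.load(f)

for comp in sbom.get("components", []):
    name = comp.get("name", "<unnamed>")
    version = comp.get("version", "<no version>")
    print(f"{name} {version}")
```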
Finally, adopt open source security tools into your workflow. Projects such as Allstar, GUAC, and in-toto attestations provide tools you can incorporate to observe and verify your software stack’s security posture. Google has published a report on how it secures its AI supply chain using provenance information, with guidance for other organizations looking to do the same.
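To make the idea concrete, here is a deliberately simplified stand-in for what in-toto link metadata captures: the hashes of a step’s inputs (materials) and outputs (products). This shows the concept, not the actual in-toto library API, and the paths are examples:

```python
# Simplified stand-in for what in-toto link metadata records: the hashes
# of a step's inputs (materials) and outputs (products). This is the
# concept, not the in-toto library API; the paths are examples.
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def record_step(name: str, materials: list[str], products: list[str]) -> dict:
    """Record which artifacts went into a supply chain step and what came out."""
    return {
        "step": name,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "materials": {p: sha256_of(p) for p in materials},
        "products": {p: sha256_of(p) for p in products},
    }

link = record_step("train-model", ["data/train.csv"], ["models/model.bin"])
print(json.dumps(link, indent=2))
```

A chain of records like this, signed at each step, is what lets a downstream consumer verify that the model they received was built from the data and code the supplier claims.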
There is no silver bullet for security, and even the most careful organizations can find themselves on the wrong end of a compromise. Adding AI models to the software supply chain only brings more complexity. But there’s no need to panic: you can improve your AI supply chain’s observability with tools and practices available today. Once you understand your supply chain, you can secure it.