Experts have called for greater transparency in AI supply chains as generative AI (GenAI) adoption continues to grow, bringing with it more security and data privacy compliance challenges for enterprises.

One proposed solution gaining traction is the AI Bill of Materials (AI BOM), a framework designed to document the components, data sources and training methodologies behind AI systems in order to mitigate risk and improve accountability.

The concept builds on Software Bills of Materials (SBOMs), which are structured, machine-readable inventories listing all components, libraries and dependencies used in a software application to enhance transparency and security.

Growing SBOM Adoption Amid Persistent Challenges

During the Software Supply Chain Security Summit, a pre-Black Hat event hosted by software supply chain security firm Lineaje in Las Vegas on August 5, Nick Mistry, the company's CISO, said that the rise in SBOM adoption among security teams has increased software transparency.

He argued this transparency is a critical first step to better secure both open-source and proprietary software that organizations depend on.

According to an Enterprise Strategy Group (ESG) study, approximately 22% of organizations currently use an SBOM and a further 4% plan to do so in the future.

While this figure may seem relatively low, Melinda Marks, practice director of cybersecurity at ESG, said it is growing – thanks in part to the adoption of common SBOM formats such as Software Package Data Exchange (SPDX), introduced by the Linux Foundation, and CycloneDX, proposed by the OWASP Foundation.

“However, 79% of respondents still find it challenging to generate an SBOM, notably because a variety of tools can be used to do so, including software supply chain security solutions, SCAs [software composition analysis tools], CSP [cloud service provider] features, application security solutions, dedicated SBOM tools or even manual processes,” she said during the summit.
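Whichever tool produces it, the output is a simple structured inventory. As a rough illustration only, the following Python sketch emits a minimal SBOM in the CycloneDX JSON format mentioned above; the libraries and versions listed are hypothetical placeholders, not a real application's dependencies.

```python
import json

# Minimal CycloneDX-style SBOM: a machine-readable inventory of the
# components an application depends on. The libraries below are
# hypothetical placeholders used purely for illustration.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "requests",
            "version": "2.32.3",
            # Package URL (purl): a standard identifier that lets
            # scanners match components against vulnerability databases.
            "purl": "pkg:pypi/requests@2.32.3",
        },
        {
            "type": "library",
            "name": "openssl",
            "version": "3.0.13",
            "purl": "pkg:generic/openssl@3.0.13",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```

In practice, inventories like this are consumed by scanning tools that match each component against known-vulnerability databases, which is what makes the transparency Mistry described actionable.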

AI BOMs Already on the Global Agenda

The concept of the AI BOM has already reached the desks of the highest-level world leaders, according to Allan Friedman, a pioneer of the SBOM and one of its most vocal advocates.

Friedman, who left his role as senior advisor and strategist at the US Cybersecurity and Infrastructure Security Agency (CISA) in July, explained that the G7 Cybersecurity Working Group agreed in May to develop a joint vision for AI security, including the creation of AI BOMs, by its second meeting later in 2025.

“The G7 is made up of diplomats representing the seven richest countries on the planet. Now, I love my friends who work at the State Department, but I'm not going to be leaning on them for the future of critical infrastructure security, the national security concerns and the defense against nation-state attacks,” Friedman noted.

“So, this is an area we [cybersecurity professionals] are going to need to be a little more explicit about how we're going to build it and how we're going to build it collaboratively,” he added.

Overall, he agreed that where software has benefited from transparency, AI would also benefit, because “AI is software”.

“The challenge we have is that transparency needs to be semantically relevant to whoever is looking at it. My concern about AI BOMs is that we’re going to implement it before we know what it is,” he added.

AI BOM Standardization Efforts

Several cybersecurity organizations have started working to standardize AI BOMs.

Some, like Sajeeb Lohani, global TISO and senior director of cybersecurity at Bugcrowd, think AI software dependencies should be included in SBOMs rather than in standalone AI BOMs.

The Linux Foundation, for instance, has published a report explaining how to implement AI BOMs with its latest SBOM format, SPDX 3.0.
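CycloneDX takes a comparable route: version 1.5 of that specification added a machine-learning-model component type with model-card fields. The sketch below, offered purely as an illustration with an invented model and dataset, shows the kind of metadata an AI BOM records; exact field names should be verified against the published schema.

```python
import json

# Hypothetical AI BOM entry, loosely modeled on CycloneDX 1.5's
# "machine-learning-model" component type. The model and training
# dataset named below are invented placeholders.
aibom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "name": "example-sentiment-classifier",
            "version": "1.0.0",
            "modelCard": {
                "modelParameters": {
                    "task": "text-classification",
                    "architectureFamily": "transformer",
                    # Training data provenance: the element of an AI BOM
                    # with no direct equivalent in a classic SBOM.
                    "datasets": [
                        {"type": "dataset", "name": "example-reviews-corpus"}
                    ],
                },
            },
        }
    ],
}

print(json.dumps(aibom, indent=2))
```

The design point both formats share is that a model is treated as just another component, so its training data and architecture can be inventoried, and audited, alongside conventional software dependencies.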

Similarly, in July, Allan Friedman’s former employer, CISA, introduced an AI SBOM working group, which has built a community-driven resource on GitHub to help organizations apply SBOM practices to AI systems.

Helen Oakley, one of the founders of the working group, also authored a paper for the US National Institute of Standards and Technology (NIST) in September 2024, titled “Securing AI Ecosystems: The Critical Role of AI Bills of Materials (AIBOM) in Mitigating Software Supply Chain Risks.”

Others are still investigating how best to standardize AI BOMs. The OWASP Foundation, for example, has created its own AI BOM working group and is looking to release its “AI BOM Operationalizing Guide and Best Practices Guide” in October 2025, a comprehensive guide detailing how to operationalize AI BOMs and best practices for secure and trusted generative AI systems.