Defense Secretary Pete Hegseth labeled artificial intelligence company Anthropic a supply chain risk to national security on Friday, following several days of escalating public conflict between the Pentagon and the San Francisco–based firm.
In prepared remarks released by the Department of Defense, Hegseth described Anthropic as a “supply chain risk to national security,” citing concerns about the integrity and governance of key AI providers whose technologies may be used in defense-related systems. The department did not publish a detailed technical assessment of the company’s products or infrastructure.
The dispute began earlier in the week after federal officials raised questions about Anthropic’s role in sensitive technology contracts. Public comments from Hegseth and senior defense officials focused on oversight of commercial AI tools that could be integrated into defense operations. The officials pointed to potential vulnerabilities in software supply chains, data handling practices, and model reliability.
Anthropic develops large-scale artificial intelligence systems used across commercial sectors. Its AI models are often integrated by third-party vendors into cloud services, analytics platforms, and workflow automation products. Some of those third-party tools are reportedly used by government agencies and defense contractors.
Defense officials expressed concern that reliance on externally developed AI models may introduce security and resilience risks. Hegseth’s statement noted that certain AI providers can become embedded in complex procurement networks that may be difficult to fully audit, particularly where proprietary code, closed training data, or opaque development processes are involved.
The Department of Defense maintains internal guidelines for evaluating software and AI vendors, emphasizing traceability, data protection, and verifiable performance under stress or adversarial conditions. Officials said Anthropic is now under formal review within that framework as a result of the designation.
During the public dispute, Hegseth criticized what he described as insufficient safeguards for systems that could interface with defense or critical infrastructure environments. Pentagon officials also referenced broader concerns about model behavior when exposed to malicious prompts or attempts to extract sensitive information.
Anthropic responded in public statements and media interviews, saying it operates under established internal safety and governance standards. Company representatives stated that their products are designed primarily for commercial customers and that the company maintains compliance programs addressing data privacy and model safety. They said they were not aware of any confirmed breach or compromise of defense-related systems linked to their technology.
The episode unfolded amid wider federal efforts to establish guardrails for artificial intelligence use in national security, law enforcement, and critical infrastructure. Policymakers have debated risks associated with reliance on a limited number of major AI providers, including concerns about concentration of control, exposure to foreign influence, and the difficulty of independently verifying model behavior.
The Pentagon said its review of Anthropic and its products remains ongoing. Officials have not provided a timeline for completing the review or for any policy changes that could follow.