Source code powering Anthropic's Claude AI has leaked for the second time in just over a year, prompting a congressional warning about national security vulnerabilities.
Rep. Josh Gottheimer (D-N.J.) sent a letter to Anthropic CEO Dario Amodei on Thursday, demanding explanations for the leaks and pressing for changes to the company's internal safety policies.
The letter, shared exclusively with Axios, frames the issue as a potential threat to U.S. competitive advantage in artificial intelligence.
Gottheimer, a leading House Democrat on AI and cybersecurity, tied the leaks directly to defense applications.
Rep. Josh Gottheimer (D-N.J.):
"Claude is a critical part of our national security operations. If it is replicated, we sacrifice the competitive edge we have worked so diligently to maintain in all facets of our national security."
The lawmaker walked a careful line, expressing support for Anthropic's role in government contracts while pressing for stronger safeguards. He noted his concern about the Trump administration's decision to block Claude from federal procurement, warning it could hinder the U.S. position against China in AI development.
Gottheimer also questioned Anthropic's decision to roll back certain internal safety protocols, pointing to a Chinese Communist Party-backed hacking incident targeting Claude last year. He raised concerns that the upcoming Claude model, Mythos, could be exploited for cyberattacks if security measures remain weakened.
Anthropic offered a different characterization of the incident.
Anthropic spokesperson:
"A Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."
The exchange underscores a broader tension: as AI systems become embedded in critical infrastructure and defense, the threshold for what counts as a security incident continues to shift.
Gottheimer's letter asks Anthropic to detail Claude's capabilities, assess the risks of malicious use, and outline protections against future attacks by outside parties.
Those answers, and how the company balances transparency with security, will shape whether advanced AI remains viable for sensitive government work.
Source: Axios