The regulatory vacuum: why the Anthropic-Pentagon dispute is a story about Congress, not corporate defiance
A policy analysis of the February 2026 AI governance crisis
On February 27, 2026, the Trump administration designated Anthropic — the American AI company behind Claude — a “Supply-Chain Risk to National Security.” The designation, previously reserved for foreign adversaries such as Kaspersky Lab and Chinese semiconductor suppliers, was triggered by Anthropic’s refusal to remove two restrictions from its $200 million Pentagon contract: a prohibition on the use of its AI for mass domestic surveillance of Americans, and a prohibition on fully autonomous weapons systems operating without human oversight.
Within hours, OpenAI secured its own Pentagon deal for classified networks, reportedly under terms that included the same two safety provisions Anthropic had been punished for defending. The apparent contradiction was striking, but the public discourse that followed missed the deeper structural issue entirely.
Media coverage, government rhetoric, and public debate converged on a single framing: Should a private company or the federal government decide how military AI is used? This framing is not merely reductive. It is dangerous, because it erases the institution that should be making these decisions: the United States Congress.
The Binary That Obscures the Real Question
The dominant narrative presented the dispute as a confrontation between corporate power and executive authority — tech billionaires versus the Pentagon. A CBS interview with Anthropic CEO Dario Amodei on the evening of February 28 exemplified this pattern. The interviewer pressed repeatedly on a single axis: Why should a private company have more say than the Department of Defense? Why should Americans trust a CEO over the federal government?
The question sounds reasonable until one examines what Amodei actually said. Across a thirty-minute interview, Amodei made the same point at least four times: he does not believe a private company should hold this authority permanently. He explicitly stated that Congress needs to legislate guardrails for military AI — particularly in areas where the technology has outpaced existing law. He described the current situation as untenable and called for democratic action. The interviewer did not pursue this line. The framing remained locked on a binary — corporate versus government — that excluded the democratic process altogether.
This is not a media critique. It is a governance problem. When the public conversation about AI regulation is reduced to a power struggle between two actors — both operating without legislative mandate in this specific domain — the possibility of democratic oversight disappears from the discourse. And when it disappears from the discourse, it disappears from the political agenda.
Two Red Lines, Two Legislative Gaps
Anthropic’s two restrictions — no mass domestic surveillance, no fully autonomous weapons — are not arbitrary corporate preferences. They correspond to two areas where AI capabilities have overtaken the legal framework.
On surveillance: as Amodei detailed in his official statement of February 26, under current law the government can purchase detailed records of Americans’ movements, web browsing, and associations from commercial sources without obtaining a warrant — a practice the Intelligence Community itself has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. What AI changes is the scale: powerful models can now assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life, automatically and at massive scale. The practice is not illegal. It was simply never useful before the era of large language models and advanced analytics. The judicial interpretation of the Fourth Amendment and the laws passed by Congress have not caught up to what is now technically possible.
On autonomous weapons: the concern is not about the partially autonomous systems currently deployed in Ukraine or under development for Taiwan contingencies. It is about fully autonomous weapons — systems that identify, select, and engage targets without any human involvement. Amodei argued, with technical specificity, that current AI systems are not reliable enough for this application, citing the fundamental unpredictability that anyone working with these models recognizes. Beyond reliability, there is an accountability question: if a fleet of coordinated autonomous systems operates under a single command node, the traditional chain of military accountability — built on the assumption that human soldiers exercise judgment at multiple levels — collapses. Anthropic offered to work directly with the Department of War on R&D to prototype these systems in a controlled environment. The Pentagon declined unless it could deploy without restrictions from the outset.
Neither of these concerns is ideological. Both are structural. And both point to the same conclusion: legislation has not kept pace with capability.
The Precedent Problem
The administration’s response to Anthropic’s position set a precedent that extends well beyond AI policy. The supply chain risk designation — a tool designed to protect national security from foreign adversaries — was deployed against an American company for exercising contractual discretion. The message to every technology firm doing business with the federal government was unmistakable: compliance is not negotiated; it is compelled.
The internal contradiction of the government’s own position deserves emphasis. As Amodei noted in his February 26 statement, the administration simultaneously threatened two actions that are logically incompatible: designating Anthropic a supply chain risk (which labels the company a security threat to be excluded) and invoking the Defense Production Act to compel continued service (which labels the company’s technology as essential to national security). A company cannot be both a threat to be quarantined and an asset to be conscripted. The coexistence of these two threats reveals that neither was grounded in a genuine security assessment; both were instruments of coercion.
The punitive character of the action is further underscored by Anthropic’s broader record. This is a company that voluntarily forfeited several hundred million dollars in revenue by cutting off access to firms linked to the Chinese Communist Party — some of which had been designated by the Department of War as Chinese Military Companies. It shut down CCP-sponsored cyberattacks targeting its systems and advocated for strong export controls on AI chips to maintain a democratic advantage. Whatever one thinks of Anthropic’s red lines, the suggestion that this company is a threat to American national security is not supported by its record.
If the precedent holds that a principled disagreement over two narrow use cases — representing, by Anthropic’s account, roughly one percent of deployed applications — can trigger a national security designation, then the space for any company to maintain safety restrictions narrows to zero. The result is an environment in which corporate safety commitments become performative — maintained only until they conflict with executive preference.
What Congressional Action Could Look Like
The policy vacuum at the center of this dispute is not abstract. It has concrete dimensions that Congress could address through legislation.

First, the data broker loophole: a statutory framework governing the government purchase and AI-enabled analysis of commercially collected personal data would close the surveillance gap that both Anthropic and bipartisan voices in Congress have identified. Several legislative proposals have circulated in previous sessions; none have advanced to a vote.

Second, autonomous weapons oversight: a statutory requirement for meaningful human control in lethal autonomous systems, with defined thresholds for what constitutes “meaningful,” would establish the accountability framework that both Amodei and military ethics scholars have called for.

Third, the weaponization of supply chain designations: Congressional review mechanisms for the application of national security designations to domestic companies — particularly where the designation is retaliatory rather than protective — would prevent the tool from being used as a coercive instrument against lawful commercial actors.
None of these proposals are novel. What is missing is political will — and a public discourse that demands legislative action rather than accepting the false binary of corporate versus executive control.
The Democratic Deficit in AI Governance
The Anthropic-Pentagon dispute of February 2026 will likely be remembered as the moment AI governance became a national security flashpoint. But the most important feature of the crisis is not what the two parties said to each other. It is what was absent from the conversation: a legislative framework that makes these confrontations unnecessary.