DoD Threatens Anthropic Over AI Safeguards

February 17, 2026

Joohn Choe

The Pentagon is threatening to designate Anthropic a “supply chain risk”, a punishment normally reserved for foreign adversaries, after months of failed negotiations over AI safeguards collided with the revelation that Claude was used during the January 3, 2026 military raid that captured Venezuelan President Nicolás Maduro.

This is the most consequential clash yet between Silicon Valley’s AI safety commitments and the U.S. military’s demand for unrestricted use of AI, and it’s worth examining what’s going on and what it could mean.

TIMELINE: HOW WE GOT HERE

June 2024 - Claude’s usage policy, as established at this time, set forth several restrictions relevant to mass surveillance, along with a separate section prohibiting users from employing Claude in connection with weapons (Anthropic).

It is notable in this context that although Claude’s guidelines include a section specifically titled “Do Not Compromise Computer or Network Systems,” Claude Code was nonetheless used in a hacking campaign by the People’s Republic of China around September 2025 (Anthropic).

November 7, 2024 - Anthropic, Palantir, and AWS announced a three-way partnership to deploy Claude 3 and 3.5 models on Impact Level 6 (IL6) classified networks - the highest security classification for DoD systems (NASDAQ).

July 14, 2025 - Anthropic announced a two-year prototype Other Transaction Agreement (OTA) with a $200 million ceiling, awarded by the DoD’s Chief Digital and Artificial Intelligence Office (CDAO), led by Doug Matty (Anthropic). The contract covered prototyping frontier AI capabilities for national security, adversarial AI risk forecasting, and technical exchanges. At the same time, CDAO awarded similar $200 million contracts to OpenAI, Google, and xAI - all four major frontier AI labs (AI.mil).

August 15, 2025 - Anthropic updated its Usage Policy, explicitly banning the use of Claude to aid in the development of chemical, biological, radiological, or nuclear weapons and explicitly forbidding the use of Claude to analyze biometric data to infer characteristics like race or religion, or for emotional analysis in interrogation contexts (Anthropic).

Also in August 2025, the CIA “quietly sent a small unit into Venezuela with the goal of providing ‘extraordinary insight’ into Maduro’s movements,” according to a person with knowledge of the matter. “Even his pets were known to U.S. intelligence agents,” Dan “Raizin” (yes, that is his nickname) Caine, chairman of the Joint Chiefs of Staff, said at a news conference (NBC).

January 3, 2026 - The U.S. launched Operation Absolute Resolve. The military bombed infrastructure across northern Venezuela to suppress air defenses while 200+ special operators from Delta Force and the FBI attacked Maduro’s compound at Fort Tiuna in Caracas. Over 150 military aircraft launched from 20+ sites. Delta Force breached the residence; Maduro and his wife Cilia Flores were “taken completely by surprise” and flown to the USS Iwo Jima, then to Stewart Air National Guard Base in New York, then by helicopter to Manhattan (NBC).

Claude was used during the active operation. While Axios could not confirm the precise role Claude played in the capture, the Wall Street Journal reported that Claude was used during the operation itself, not just in preparations for it. Following the raid, an employee at Anthropic asked a counterpart at Palantir how Claude had been used, according to people familiar with the matter (WSJ; Axios).

January 15, 2026 - Secretary of War Pete Hegseth told a crowd at SpaceX headquarters, apparently referring to Claude (Defense Dept):

“We will not employ AI models that won’t allow you to fight wars. We will judge AI models on this standard alone; factually accurate, mission relevant, without ideological constraints that limit lawful military applications. Department of War AI will not be woke. It will work for us. We’re building war ready weapons and systems, not chatbots for an Ivy League faculty lounge.”

February 13, 2026 - The Wall Street Journal broke the story that Claude was used during the Maduro raid (WSJ). This was the first public confirmation of a commercial AI model being used in a classified military combat operation.

February 15, 2026 - Axios published the first exclusive, “Pentagon threatens to cut off Anthropic in AI safeguards dispute.” The article detailed months of negotiations in which the Pentagon demanded the right to use Claude for “all lawful purposes” - presumably including lethal operations - while Anthropic insisted on two hard limits:

  1. No mass surveillance of Americans

  2. No fully autonomous weapons

Pentagon officials described Anthropic’s restrictions as “unduly restrictive” with “all sorts of gray areas.” A senior War Department official called Anthropic the most “ideological” of the AI labs. The article revealed that OpenAI, Google, and xAI had all agreed to remove their safeguards for unclassified military systems, and one of the three had already agreed to the full “all lawful purposes” standard (Axios).

February 16, 2026 - Axios published the escalated follow-up by Dave Lawler, Maria Curi, and Mike Allen: “Pentagon threatens to label Anthropic’s AI a ‘supply chain risk.’” The article revealed that Hegseth was “close” to designating Anthropic a supply chain risk - a penalty usually reserved for foreign adversaries - which would require every company doing business with the Pentagon to certify they don’t use Claude. A senior Pentagon official stated: “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.” Chief Pentagon spokesman Sean Parnell confirmed: “The Department of War’s relationship with Anthropic is being reviewed” (Axios).

WHAT HAPPENS NOW

Before Hegseth escalated, Anthropic could realistically have simply walked away.

Anthropic already faces internal pressure from engineers over the Pentagon work, and the $200 million contract is about 1.4% of Anthropic’s reported $14 billion in annual revenue. Claude is also the only AI model currently on classified networks, and senior administration officials admit that other models are “just behind” in government applications. All of this points to strong leverage on Anthropic’s part.

The supply chain risk designation puts the situation at a new level, however. The potential impact extends far beyond the contract’s cancellation - effectively, it would force eight of the ten largest U.S. companies, many of which also do business with the Pentagon, to purge Claude from all their systems (Axios).

We saw a similar drama with law firms early in the Trump administration. Paul, Weiss; Skadden; Kirkland & Ellis; Cadwalader; and Simpson Thacher all caved, committing to a combined $1 billion in pro bono legal work on behalf of the administration.

But Perkins Coie, Jenner & Block, and WilmerHale all defied the Trump administration, sued, and won.

The punitive nature of the supply chain risk designation, Anthropic’s flamboyant leadership, and the apparent sentiment among the rank-and-file developers who work on Claude all make it an open question how this will shake out.

---

Full post free before the paywall on Patreon, sources & predictive model output for subscribers after the paywall (link in bio)


© 2026 rcanzlovar.com