The hypothetical nuclear attack that escalated the Pentagon’s showdown with Anthropic
Start-up Anthropic and the U.S. military are careening toward a clash over government use of artificial intelligence — and whether it should be allowed to kill.

As a standoff between artificial intelligence firm Anthropic and the Pentagon deepened this week, the two sides offered starkly different accounts of a key discussion about a hypothetical nuclear strike against the United States, revealing the intensity of their showdown over the American military’s potential use of lethal autonomous weapons.
A defense official said the Pentagon’s technology chief whittled the debate down to a life-and-death nuclear scenario at a meeting last month: If an intercontinental ballistic missile were launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?
It’s the kind of situation where technological might and speed could be critical to detection and counterstrike, with the time to make a decision measured in minutes and seconds. Anthropic chief executive Dario Amodei’s answer rankled the Pentagon, according to the official, who characterized the CEO’s reply as: You could call us and we’d work it out.
An Anthropic spokesperson denied Amodei gave that response, calling the account “patently false,” and saying the company has agreed to allow Claude to be used for missile defense. But officials have cited this and another incident involving Claude’s use in the capture of Venezuelan leader Nicolás Maduro as flashpoints in a spiraling standoff between the company and the Pentagon in recent days. The meeting was previously reported by Semafor.
A face-to-face meeting Tuesday between Amodei and Defense Secretary Pete Hegseth escalated the situation, The Washington Post reported. The two sides are now careening toward a defining power struggle over whether the U.S. government should have the freedom to spy on or kill humans using the potent new technology, based in part on extreme hypotheticals and games of telephone.
The Pentagon had given Anthropic until 5:01 p.m. Friday to drop its objections to using Claude in relation to autonomous weapons and mass surveillance of U.S. citizens. If not, officials had said they may use government authority to force Anthropic to hand over the technology anyway — while also blacklisting the company from future defense work.
Sean Parnell, the Pentagon’s chief spokesman, said in an X post Thursday that the department had no interest in conducting mass domestic surveillance nor deploying autonomous weapons, but wanted to use AI for “all lawful purposes.”
“This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk,” Parnell said.
Amodei said in a statement late Thursday that his company was ready to continue working with the Pentagon, but would not change its stance. Current AI systems are not reliable enough to power robotic weaponry without putting troops and civilians alike at risk, he said, and existing laws on domestic surveillance do not account for the sweeping potential of AI snooping tools.
“In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei said in his first public comments on the battle. “Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now.”
Anthropic did not expect to end up in a fight with Pentagon leaders when it became the first major AI lab to strike a deal to work on classified U.S. military networks in late 2024. But the dispute highlights how the startup, founded in 2021 by safety-minded refugees from ChatGPT-maker OpenAI, has struggled to deftly navigate Washington in the second Trump administration. (The Washington Post has a content partnership with OpenAI.)
Anthropic recently added a former deputy chief of staff to President Donald Trump to its board and explored taking investment from a fund led by Donald Trump Jr., according to people familiar with the pitch. Yet its leaders have also repeatedly clashed with the White House in public.
In a scathing post on X in October, David Sacks, Trump’s top AI adviser, accused the company of “fear-mongering” and pursuing “regulatory capture” in an attempt to bend the government to its will. Anthropic leaders have criticized one of the administration’s key AI policies in recent weeks, even as the dispute with the Pentagon was brewing.
“There’s the subtext of Anthropic not being aligned with the MAGA agenda,” said Steven Feldstein, a senior fellow at the Carnegie Endowment, who researches the use of AI in war. “This is as much of a political fight as a military use issue.”
Experts say the outcome of the clash could shape the trajectory of the burgeoning relationship between the AI industry and the U.S. military, potentially signaling to other leading firms that the cost of doing business with the Pentagon could be losing control of their innovations.
Unlike a gun or a jet engine, AI’s potential uses on future battlefields keep changing. The U.S. already builds autonomy into its weapons, and AI-enabled systems are part of almost every drone, ship or aircraft in production or envisioned for the future force. The Trump administration is embarking upon a vast expansion of the military’s use of AI.
But leading figures in the development of the technology have long had ethical and legal concerns about giving AI the power to make life-and-death decisions or turbocharging surveillance.
Emil Michael, a former Uber executive who is now undersecretary of defense for research and engineering, has taken the lead in the discussions with Anthropic. He has argued the government and not individual tech firms should have the final say in how the technology is used, according to a person familiar with the discussions, who spoke on the condition of anonymity to describe internal deliberations. Michael did not respond to a request for comment.
To the Pentagon, that means having a policy permitting what Parnell called “all lawful purposes.” Amodei has held firm that Anthropic has red lines around autonomous weapons and surveillance, a stance that has won support from his employees and could serve as a recruiting tool for idealistic engineers as the company heads toward an expected initial public offering.
Late Thursday, Michael accused Amodei of having a “God-complex” in a post on X. “He wants nothing more than to try to personally control the US Military and is OK putting our nation’s safety at risk,” Michael wrote.
The escalating dispute has baffled people who study how the military uses AI.
Dean Ball, a former Trump administration AI adviser, said he hoped the two sides could still find a way to step back from the brink. “The solution to that problem is to cancel the contract,” Ball said. “Going on a jihad against Anthropic is a whole other layer of escalation.”
Leapfrogging off Amazon
Anthropic owes its head start at the Pentagon in part to a partnership the intelligence community forged with Amazon in 2013, which paved the way for classified material to be handled in Amazon’s cloud. Over the course of the next several years, the tech giant built out secure computing infrastructure for the intelligence community, beating out rivals for coveted contracts to house classified and top secret data.
In 2023 and 2024, Amazon invested billions into Anthropic. The relationship greased the AI start-up’s path into the military’s closely guarded systems, according to a person familiar with the relationship, who spoke on the condition of anonymity to describe it. Amazon declined to comment. (Amazon founder Jeff Bezos owns the Post.)
Anthropic also found an ally in software analytics firm and longtime defense contractor Palantir, which in 2024 teamed up with the AI firm and Amazon to offer Claude on its systems used by military and spy agencies. Anthropic said the partnership would boost the military’s ability to process huge amounts of data and make good decisions, saying it was proud to take on the work.
Anthropic has “first mover status and their product is good,” said another person familiar with the military’s work with AI companies, who spoke on the condition of anonymity to describe sensitive issues relating to national security.
Since Claude’s deployment with the Pentagon, Anthropic said Thursday, its technology has been put to use analyzing intelligence, planning operations and in cyberwarfare. The company has deepened its work with the government since Trump returned to office and pushed federal agencies to rapidly scale up their use of AI. In July it signed a $200 million contract with the Defense Department and made a deal the following month to provide its system to civilian agencies for a dollar apiece.
But the company’s advantage has eroded as competitors like Google, OpenAI and xAI make deals of their own with the Pentagon. Officials say the other leading firms have agreed to the Pentagon’s “all lawful purposes” policy for unclassified work, and that xAI has also signed a deal for classified systems. The three companies did not respond to requests for comment.
Anthropic has differed from its rivals in simultaneously courting the administration for contracts while opposing it in other areas of policy.
When the White House was pushing an executive order that would preempt restrictive state-level AI laws this winter, Anthropic was promoting a safety-oriented AI bill in California.
Amodei has also criticized the Trump administration’s drive to allow exports of American AI chips to China. On the sidelines of the World Economic Forum in Davos, Switzerland, last month, Amodei compared the policy to “selling nuclear weapons to North Korea.” After meeting with Amodei this month on Capitol Hill, Sen. Elizabeth Warren (D-Mass.) said she would introduce legislation to sharply limit any exports.
Anthropic has also hired several former Biden administration officials.
“The administration just wants everyone to bend the knee and [Amodei] won’t,” said an investor who works on defense technology, who spoke on the condition of anonymity to avoid getting into conflicts with any of the parties.
In the past year, Anthropic has made moves that could smooth its relationship with the Trump administration. The company ramped up its lobbying in Washington, spending $3.1 million and bringing on a former senior aide to Secretary of State Marco Rubio, according to disclosures compiled by transparency group Open Secrets. It announced this month that it was adding Chris Liddell, a former tech executive who served in the first Trump White House, to its board.
The company also recently explored an investment from the Trump-allied venture capital firm 1789 Capital but was turned down, according to two people familiar with the pitch, who spoke on the condition of anonymity to describe private business discussions. Donald Trump Jr. is a partner at the firm, alongside Chris Buskirk, an ally of Vice President JD Vance.
‘Once and for all’
Insiders in the world of defense technology argue that the current fight between the Pentagon and Anthropic appears to be more philosophical than technical, and that the administration had already soured on the AI company — even as rank-and-file military personnel were finding its services increasingly useful.
“The administration and the Republicans are looking for ways to get rid of Anthropic once and for all,” the person familiar with the military’s work with AI companies said. The Pentagon clash could provide an opportunity to carry that through. In January, Hegseth issued a directive for the military to embrace AI as though the country were at war.
The U.S. has committed to some guardrails on autonomous weaponry. France, the United Kingdom, China, and the U.S. all previously said they would require a human to be involved in all decisions to deploy nuclear weapons. In a statement to the Post, the Pentagon said the Trump administration intends to maintain that pledge.
“It remains the Department’s policy that there is a human in the loop on all decisions on whether to employ nuclear weapons,” a senior defense official said. “There is no policy under consideration to put this decision in the hands of AI.”
But that still leaves room for AI to influence decisions on targets and speed of response. In a recent nuclear war game at King’s College London, leading language models including versions of ChatGPT, Claude and Google’s Gemini all quickly favored launching warheads. That could influence a human’s decision to fire, said Paul Dean, vice president of the global nuclear program at the nonprofit Nuclear Threat Initiative.
“It’s not simply ensuring that there’s a human being in the decision-making loop,” Dean said. “The question is, to what extent will AI impact that human decision-making?”
Neither side in this week’s faceoff knows for certain what AI’s use in war will ultimately look like, but both seem unwilling to trust in the other’s future decisions.
“The Pentagon does not trust that Anthropic will be a reliable vendor, and Anthropic worries about misuse of its technology,” said Michael C. Horowitz, a director at the University of Pennsylvania who oversaw AI weapons policy during the Biden administration.
Because Claude is already in use across the Defense Department, exiling Anthropic and switching to a rival could prove costly. Although Defense officials have suggested they could use the Defense Production Act to force the AI company to share its systems, experts are split on whether the law could be applied.
Doing so would send a chilling message to the AI firms the Pentagon hopes to lean on: that they risk having their own innovations seized if the government sees something it wants.
That would cross a troubling line, said Katie Sweeten, a former liaison for the Justice Department to the Pentagon and a partner at Scale LLP. “This is a literal nuclear option which I think rightfully companies should be very concerned about.”