Pentagon assault on Anthropic sends shockwaves across Silicon Valley
The Trump administration’s declaration that AI company Anthropic would be cut off from all government contracts hardens political and cultural battle lines across Silicon Valley

The Trump administration’s declaration that AI company Anthropic would be cut off from all government contracts shook the tech industry late Friday, hardening political and cultural battle lines across Silicon Valley over military use of artificial intelligence.
President Donald Trump ordered government agencies to “immediately cease” using Anthropic’s technology in a post on Truth Social on Friday, and Defense Secretary Pete Hegseth, in his own post on X, labeled the company a “supply chain risk to national security.” The moves came after the company refused to allow its technology to be used for domestic surveillance and in autonomous weapons.
The Trump administration’s assault on Anthropic appeared to put the company on course to lose billions of dollars of potential revenue, although the startup said in a blog post late Friday that it would challenge Hegseth’s designation in court.
The firm’s conversational assistant, Claude, is being deployed or tested in at least five government agencies, including the Pentagon, the Department of Health and Human Services, the Department of Homeland Security, and the Department of Energy, according to recent disclosures of AI use mandated by law and an executive order.
Friday’s aggressive moves by the Trump administration put all of Silicon Valley on notice that tech companies seeking Pentagon contracts risk massive political and business fallout if they don’t back administration policies and cede control of how their technology is used. Rivals of Anthropic including Elon Musk and other tech allies of Trump seized on the conflict to pledge that their own companies would not question Pentagon policies, positioning themselves as loyal patriots.
Conflict has bubbled between Anthropic and the Trump administration since last year. The company leveraged its relationship with investor Amazon to become the first AI developer to have its technology integrated into classified systems.
But Anthropic, co-founded in 2021 by CEO Dario Amodei, his sister Daniela, and other former employees of ChatGPT-maker OpenAI, also rankled tech allies of Trump by positioning itself as more safety conscious than other AI developers. (Amazon founder Jeff Bezos owns the Washington Post, which has a content partnership with OpenAI.)
In the fall, Trump’s AI and crypto czar David Sacks accused Anthropic of attempting to manipulate the government with “fearmongering” about AI technology. Around the same time, Semafor reported that Anthropic displeased the White House by raising ethical objections to how the administration wanted to use its technology, including for surveillance.
Those tensions flared into an unprecedented public fight between the Pentagon and the tech company this week. Frantic talks between the two sides continued right up until Hegseth’s announcement late Friday that he was declaring Anthropic a risk to national security, according to an X post from Emil Michael, the Pentagon’s technology chief, and a person familiar with the talks.
Michael was on the phone with Anthropic, urging the company to agree to allow analysis of some bulk data on Americans, at the very moment Hegseth announced in his X post that Anthropic had been designated a supply chain risk, according to the person, who spoke on the condition of anonymity to discuss the talks.
Anthropic said in a statement responding to Hegseth on Friday that it would legally challenge his declaration against the company, suggesting that the dispute is far from over. Experts said that Anthropic had strong legal grounds for a challenge.
A company can only be designated a supply chain risk through a legal process, said Steven Feldstein, a senior fellow at the Carnegie Endowment for International Peace who researches the use of AI in war. “It isn’t legally sufficient to simply proclaim or label [a supply chain risk] and have this be the final word,” he said. “It’s a major overreach.”
Jessica Tillipman, an associate dean at George Washington University’s law school, said Anthropic could probably make a strong argument in court that it had been unfairly targeted. “This is on incredibly shaky ground,” she said of Hegseth’s declaration on Friday. “I don’t think you have seen a case for more politicized use.”
Hegseth’s post also asserted that all companies that do business with the U.S. military are now prohibited from doing any commercial activity with Anthropic. Although the legal basis for that sweeping ban was unclear, it could have disastrous consequences for Anthropic, which has received billions of dollars in investment from partners such as Amazon, Microsoft, and Nvidia that also supply the military. The companies didn’t respond to requests for comment.
Should the Pentagon prevail, the U.S. military will need to adapt fast. Claude is deeply integrated into the Maven Smart System, an AI tool built with the technology company Palantir that runs on Amazon’s cloud. It provides troops with a unified picture of intelligence streaming in from multiple sensors, said retired Air Force Lt. Gen. Jack Shanahan, who served as the first director of the Pentagon’s Joint Artificial Intelligence Center and is now an adjunct senior fellow at the Center for a New American Security, a think tank.
After the U.S. seizure of Venezuelan strongman Nicolás Maduro, an image circulated showing Claude operating alongside Maven during the operation, Shanahan said, prompting Anthropic officials to question Palantir about how the company’s technology had been used.
Claude is the “single most widely deployed AI system in the U.S. military,” Shanahan said. He added that it wouldn’t make sense to try to extract the AI tool from all of the Defense Department systems it supports, just as service members are getting skilled with the technology.
In Silicon Valley, debate raged Friday over whether Anthropic should be celebrated for taking a stand, criticized as unpatriotic, or scoffed at for being strategically naive.
Right-leaning leaders such as Palmer Luckey, founder of the defense startup Anduril, and investor Keith Rabois posted in support of the military’s decision. Anthropic employees cheered its moves in online posts, and hundreds of employees of Google and OpenAI signed a public letter backing the company’s stance.
Anthropic’s rivals were poised to take advantage of its missteps.
OpenAI chief executive Sam Altman wrote in a memo to all staff late Thursday that he had been negotiating with the Pentagon, according to a copy reviewed by The Post. The memo was first reported by The Wall Street Journal.
Altman wrote that the dispute between Anthropic and the Pentagon had become “an issue for the whole industry,” and that the spat was not about the use of AI but about “control.” The country, he said, “absolutely needs help with AI for defense if we want to continue to enjoy peace and prosperity.”
But Altman added that he was seeking a middle-ground deal with the Defense Department. Under it, OpenAI would agree to allow any use except those that are “unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons,” he wrote. And he said the company could deploy technical safeguards and personnel “to partner with the government to ensure things are working correctly.”
Late on Friday, Altman wrote in a post on X that he had reached such an agreement with the Defense Department to deploy OpenAI’s technology in classified U.S. networks.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote. The Pentagon “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
Jeremy Lewin, under secretary of state for foreign assistance, humanitarian affairs, and religious freedom, wrote in a post on X that the new OpenAI deal permitted the Pentagon the freedom of “all lawful use” of AI that it had sought from Anthropic. The agreement represented “a compromise that Anthropic was offered, and rejected,” he wrote.
Musk, whose company xAI was certified to work with classified military systems this week, also stepped into the fray. “Anthropic hates Western civilization,” he wrote in a post Friday on his social network X. Musk and xAI did not respond to requests for comment.
Lewin held up the billionaire as showing a better way for AI firms to engage with the government.
“Elon and xAI have already agreed to the ‘all lawful uses’ principle — meaning that he’s already agreed not to shut off U.S. systems for nonlegal prudential discretionary reasons,” Lewin, a former staffer for Musk’s government efficiency initiative, the U.S. DOGE Service, wrote on X. “So there’s your difference. Anthropic wants to add additional conditions — Elon has agreed to promise he won’t pull the plug for our systems.”