The United States government is mad that Anthropic won’t let them use their AI for war
Anthropic refused to let the federal government use its technology for mass domestic surveillance and autonomous weapons. Now, OpenAI has a deal with the Department of Defense
Anthropic refused to allow the federal government to use its technology for mass domestic surveillance and autonomous weapons. The decision angered President Donald Trump, who directed government agencies to stop working with the company. Instead, the federal government will now work with OpenAI, Anthropic’s competitor. The rushed deal with OpenAI has experts and critics scratching their heads. Here’s what we know about OpenAI’s new federal contract.
Anthropic told the federal government no, and President Trump got big mad
The federal government is ending its contract with Anthropic in favor of working with OpenAI. It wasn’t a business decision, and it wasn’t about saving the government money. The switch happened because Anthropic executives refused to bend the company’s ethics and allow unrestricted use of its technology.
The demand came from Defense Secretary Pete Hegseth. The former Fox News personality gave the San Francisco-based AI company one week to agree to let its technology be used for mass surveillance and autonomous weapons. In a rare show of strength from a U.S. company, Anthropic denied the request, prompting Sec. Hegseth to launch a bitter smear campaign.
President Trump ordered all government agencies to halt work with Anthropic. The Pentagon, led by Sec. Hegseth, added to the smear campaign by designating Anthropic “a supply chain risk,” effectively accusing the company of being a source of vulnerability or disruption. The designation also carries reputational harm, as businesses think twice before working with a company bearing that label.
“We held to our exceptions for two reasons,” reads a statement published by Anthropic. “First, we do not believe that today’s frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America’s warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights.”
OpenAI rushed a deal with the federal government
OpenAI quickly responded to the news about Anthropic by offering its own services. Sam Altman, the CEO of OpenAI, claims that the company’s deal with the federal government comes with red lines, stating that the agreement makes clear the technology can’t be used for mass domestic surveillance or autonomous weapons.
“In our agreement, we protect our red lines through a more expansive, multi-layered approach,” reads a statement from OpenAI. “We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law.”
However, critics aren’t buying it.
The language around the red lines is intentionally vague. The deal states that the technology can be used for “all lawful purposes,” a phrase federal agencies can interpret in myriad ways. Language from the agreement shared by OpenAI seems to leave wiggle room for the federal government to eventually use the technology for autonomous weapons.
“The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols,” reads a publicly released part of the agreement. It later states, “Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.”
AI has a terrifying track record in military simulations
Kenneth Payne, Professor of Strategy at King’s College London, recently ran a test pitting AI models against each other in war games. Payne used GPT 5.2, Claude 4, and Gemini 3 Flash and put them into a simulated war. Across the 21 games, the models chose nuclear escalation 95 percent of the time. The AIs wrote explanations for why they chose to escalate; according to the study, they treated it as simply a logical next step.
“The tactical threshold was crossed readily: 95 percent of games saw at least some tactical nuclear use,” the study explains. “Models discussed tactical nuclear use as a legitimate coercive tool, treating it as an extension of conventional escalation rather than a categorical boundary.”
There is debate about why the AI programs so readily recommended nuclear threats. Some believe it is because humans have a very limited history with, and understanding of, nuclear weapons, which means that AI, trained on human knowledge, has very limited knowledge as well. Others think that AI simply doesn’t share humans’ fear of nuclear weapons.
Fortunately, the AI models mostly used nuclear weapons as a tactical response, and the threat alone was often enough, deployed regularly without causing damage. However, the models did choose three strategic nuclear strikes. One was deliberate; the other two were accidental.