When Saying No Becomes a National Security Threat: The Anthropic Injunction and What It Means for Free Speech
The question at the center of this case is not really about artificial intelligence. It is about whether the federal government can destroy a private company for disagreeing with it out loud.
On March 26, 2026, a federal judge in San Francisco answered that question, at least for now. Judge Rita F. Lin granted Anthropic’s request for a preliminary injunction, a court order that temporarily blocks the government from enforcing a series of actions taken against the AI company in late February. Those actions included a presidential directive ordering every federal agency to immediately stop using Anthropic’s technology, and a Pentagon designation labeling Anthropic a “supply chain risk to national security,” a designation previously reserved for foreign adversaries like Huawei.
The order is paused for seven days to give the government time to appeal. This is not over. But how the court got there, and who showed up to weigh in, matters far beyond this particular fight.
How We Got Here
Anthropic makes Claude, the AI model the Pentagon had been using on its classified systems since 2024. From the beginning, Anthropic maintained two limits on how Claude could be used, even by the military. Claude could not be used to make lethal decisions without a human in the loop. And it could not be used to conduct mass surveillance of American citizens.
The Pentagon accepted these conditions when it signed a $200 million contract with Anthropic in July 2025, and again when Claude was extended across all three branches of government a month later. Anthropic had spent 18 months going through national security background checks. The government granted it a Top Secret facility clearance. Nobody raised any concerns.
In the fall of 2025, the Pentagon decided it wanted those limits removed. It demanded what it called “all lawful uses,” meaning no restrictions at all. Anthropic agreed to nearly everything except the two lines it considered non-negotiable. Negotiations stayed polite. Anthropic even offered to help the Pentagon transition to a different vendor if a deal could not be reached.
Then Anthropic’s CEO, Dario Amodei, published a public statement explaining the company’s position.
What followed was immediate and severe. President Trump posted on Truth Social directing every federal agency to cease all use of Anthropic’s technology, calling the company “radical left” and its employees “leftwing nut jobs.” Hours later, Defense Secretary Pete Hegseth posted on X declaring Anthropic a supply chain risk to national security and announcing that any company doing business with the U.S. military was also forbidden from doing business with Anthropic. He called this “final.”
The supply chain risk designation is a legal tool Congress created in 2011 to deal with a specific problem: foreign governments, particularly China, trying to sabotage American defense systems by slipping compromised technology into the supply chain. It had never been used against an American company. Anthropic is a Delaware public benefit corporation headquartered in San Francisco, with a current Top Secret government clearance.
The First Amendment Problem
To win a First Amendment retaliation claim, a company has to show three things. First, that it was doing something the Constitution protects, like speaking publicly on a matter of public concern. Second, that what the government did would scare a reasonable person out of continuing to speak. Third, that the speech was actually what motivated the government’s actions.
All three were present here, but the third was the easiest to prove, because the government’s own documents said so.
Anthropic had held the same usage restrictions for over a year. During that entire time the Pentagon praised the company, expanded its contracts, and never once suggested it posed any kind of security risk. The escalation happened immediately after Amodei went public with his position. Both the Trump Truth Social post and the Hegseth statement explicitly attacked Anthropic’s rhetoric, ideology, and attitude in the press. The internal Pentagon memorandum that supposedly justified the supply chain risk designation stated directly that Anthropic’s “risk level escalated” because it was “engaging in an increasingly hostile manner through the press.”
That sentence is what lawyers call a concession. You cannot use a national security law designed to stop foreign saboteurs as a weapon against a company because you do not like what its CEO said in the newspaper. Judge Lin described the government’s theory as “Orwellian,” writing that nothing in the law “supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”
The government argued that the contract dispute, not the public statement, caused the ban. The court rejected this. Anthropic had held the same contractual position for over a year without triggering any of this. The government could not explain why a company that had been praised as a trusted national security partner became an unacceptable threat in the same week its CEO published an essay.
Why Anthropic’s Two Red Lines Matter
While the legal arguments were playing out in a San Francisco courtroom, the real-world consequences of ignoring those two red lines were playing out on a battlefield. And what actually happened is more troubling than most of the coverage suggested.
During the U.S. military’s operations against Iran beginning on February 28, 2026, the Pentagon’s AI targeting system processed data from sensors, satellite imagery, and other sources to generate strike recommendations at a speed and scale previously impossible. The military struck roughly 1,000 targets in the first 24 hours alone, double the pace of the 2003 shock and awe campaign in Iraq. The difference, Pentagon officials said, was artificial intelligence.
The system was Palantir’s Maven Smart System, built over six years after Palantir took over the contract Google abandoned in 2018 following a staff revolt. Maven is not a large language model. Its core technology is computer vision and sensor fusion software that predates large language models by years. It pulls together satellite imagery, signals intelligence, and sensor data to move a target through every stage of the process from initial detection to the order to strike. Claude was added to Palantir’s ecosystem years after the core system was operational, as an interface layer that allows analysts to search and summarize intelligence reports in plain English. Claude did not detect targets, process radar, fuse sensor data, or recommend strikes. The public conversation about Claude and the school bombing was, in the precise words of one analyst covering this, a distraction organized by the charisma of a familiar technology around a catastrophe that had nothing to do with it.
One of those strikes hit the Shajareh Tayyebeh primary school in Minab, in southern Iran. Between 175 and 180 people were killed. Most of them were girls between the ages of seven and twelve, killed during the morning school session.
A subsequent investigation found the building had been classified as a military facility in a Defense Intelligence Agency database. That database had not been updated to reflect that the building had been separated from the adjacent Revolutionary Guard compound and converted into a school, a change that satellite imagery shows had occurred by at least 2016. The school appeared in Iranian business listings. It was visible on Google Maps. A search engine could have found it. At 1,000 targeting decisions an hour, nobody searched.
A chatbot did not kill those children. A system specifically designed to eliminate deliberation did.
This is the point Anthropic was making when it insisted that lethal autonomous targeting without human oversight was something it was not willing to enable. Not because Claude would have made a different decision in that specific targeting chain; it was not involved in that chain at all. But because the entire architecture of speed-first targeting, in which deliberation is treated as inefficiency to be engineered out of the process, is precisely what Anthropic’s red line was designed to resist.
AI systems are only as good as the data they were trained on. That data reflects the world as it existed at some point in the past, along with the choices, priorities, and blind spots of the people who assembled it. In a targeting context, information that is weeks or years out of date can be the difference between a military installation and a school full of children. Research has shown that AI targeting systems can associate protected characteristics, including racial and religious identity, with violence and other negative sentiments. A system’s inability to perform correctly when presented with data it has never encountered before can produce outcomes with no rational basis.
Then there is what researchers call automation bias. Humans presented with a machine recommendation tend to follow it. In a high-tempo operational environment, with targets being identified and strikes authorized in seconds rather than hours, the space for human reflection shrinks dramatically. The human in the loop can become a formality rather than a safeguard. Research on the anti-ISIS campaign showed that the civilian casualty rate was approximately 35 times higher than the U.S. military had estimated, with documented cases of sites struck because the underlying intelligence data was inaccurate or out of date.
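To make that failure mode concrete, here is a deliberately simplified sketch in Python. Every name and number in it, the site identifier, the database record, the sixty-second threshold, is invented for illustration; it does not describe Maven or any real targeting system. What it isolates is the structural point: the cross-check that would catch a stale record is exactly the step that disappears when the review window is compressed.

```python
# Hypothetical sketch only: a stale reference database, an optional open-source
# cross-check, and a review window that shrinks as throughput rises.
# All identifiers and values are invented; this models no real system.
from datetime import date

REFERENCE_DB = {
    "site-4471": {"label": "military_facility", "last_verified": date(2015, 6, 1)},
}

CURRENT_OPEN_SOURCE = {
    "site-4471": "primary school",  # what a basic external lookup would return today
}

def recommend(site_id: str, review_seconds: int) -> str:
    record = REFERENCE_DB.get(site_id)
    if record is None:
        return "no recommendation"

    years_stale = (date(2026, 2, 28) - record["last_verified"]).days / 365

    # The cross-check is where stale data gets caught. In a speed-first
    # pipeline it is the first step cut when the review window shrinks.
    if review_seconds >= 60 and "school" in CURRENT_OPEN_SOURCE.get(site_id, ""):
        return f"hold for human review (record {years_stale:.0f} years old)"

    # Automation bias: with no time to check, the database label stands.
    return "strike recommended" if record["label"] == "military_facility" else "no recommendation"

print(recommend("site-4471", review_seconds=300))  # hold for human review (record 11 years old)
print(recommend("site-4471", review_seconds=5))    # strike recommended
```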
Anthropic knew this. It said so. It drew two specific lines around the two use cases it believed its product was not ready to handle safely. Those were not arbitrary restrictions. They reflected the company’s direct knowledge of what Claude could and could not reliably do. The government’s response was to designate Anthropic a national security threat for saying so out loud.
The mass surveillance concern is equally serious. AI enables the collection and analysis of location information, social media posts, and other data to reconstruct people’s movements, associations, and habits at a scale previously unimaginable. The inferences AI draws from that data are not facts. They are patterns, and patterns reflect assumptions. Those assumptions often have a demographic shape. The communities most likely to be swept into a mass surveillance dragnet are rarely the ones with the resources to challenge it.
The Friends of the Court
Courts accept written arguments from outside parties through what are called amicus curiae briefs, Latin for “friend of the court.” These come from organizations or individuals who are not parties to the case but have something relevant to say. In most preliminary injunction hearings, a handful arrive. In this one, they poured in from directions nobody anticipated.
Start with the military brief, because it is the one that should give the administration the most pause.
Twenty-two retired senior military officers filed in support of Anthropic. The list included former secretaries of the Air Force, the Army, and the Navy, a former Coast Guard commandant, and Michael Hayden, a retired four-star Air Force general who also served as Director of the CIA, Director of the NSA, and Principal Deputy Director of National Intelligence. These are not people with a habit of filing against the Pentagon. They did so here because they concluded the Pentagon’s actions were a misuse of military authority, not an exercise of it.
Their brief was direct. They described the supply chain risk designation as “an extraordinary and unprecedented step” that required “firm grounding” in law. They wrote that the Secretary’s conduct amounted to “retribution against a private company that has displeased the leadership,” and concluded that “far from protecting U.S. national security, the Secretary’s conduct here threatens the rule-of-law principles that have long strengthened our military.”
Read that again. Former secretaries of the armed services and a former CIA director told a federal court that the Secretary of Defense’s actions were weakening the military, not protecting it. That is not a political statement. That is a professional judgment from people who spent careers building the institutions Hegseth claims to be defending.
The First Amendment coalition came from a completely different direction, and it was equally unusual. The Foundation for Individual Rights and Expression, known as FIRE, filed jointly with the Electronic Frontier Foundation, the Cato Institute, the Chamber of Progress, and the First Amendment Lawyers Association. FIRE and Cato are libertarian-leaning organizations. The Electronic Frontier Foundation comes from the civil liberties left. They signed the same document.
Their brief called the Pentagon’s actions “a textbook violation of Anthropic’s First Amendment rights” and warned that if the designation stood, it would create “a culture of coercion, complicity, and silence, in which the public understands that the government will use any means at its disposal to punish those who dare to disagree.” They also raised a more provocative legal argument: that Claude’s outputs reflect Anthropic’s values and design choices, and that forcing Anthropic to strip out those choices is a form of compelled speech, the same constitutional violation as forcing a newspaper to publish something it does not want to print. Judge Lin did not need to reach that argument to rule, but it is in the record and it will matter on appeal.
Then there was the brief nobody saw coming. A group of Catholic moral theologians filed in support of Anthropic. The authors were Charles Camosy of the Catholic University of America, Joseph Vukov of Loyola University Chicago, Brian J.A. Boyd, and Brian Patrick Green, who teaches ethics at Santa Clara University’s Graduate School of Engineering. Their argument was not primarily about the law. It was about moral responsibility.
They wrote that “decisions affecting human life, freedom, and dignity must remain the responsibility of human actors,” and that Anthropic’s refusal to allow an AI system to make lethal decisions without human oversight was not corporate obstruction. It was ethics, grounded in just war doctrine and Catholic social teaching about the limits of technological autonomy. They described Anthropic as “a responsible and moral corporate citizen, not a threat to the safety of the American supply chain.”
One of the authors put it plainly before the ruling came down. “You can imagine an alternate universe where Dario Amodei just said, okay, we’ll sign it, it’s no big deal. They would be doing fine as a business, and the rest of the world would not be talking about AI ethics right now.” He is right. Anthropic chose a harder path.
Former national security officials filed arguing the designation actually undermines national security by threatening the public-private relationships on which advanced military capabilities depend. One hundred forty-nine former federal judges, including Michael Luttig and Nancy Gertner, filed arguing that there is no national security exception to ordinary judicial review, and that courts must not surrender their oversight role to the executive branch. Values-led investors filed. Small software developers filed expressing confusion about whether using Claude in any product would expose them to legal liability under the sweeping ban.
Step back and look at who filed: libertarians, progressive civil liberties advocates, Catholic moral theologians, former CIA directors, former service secretaries, former federal judges, and startup developers. They disagree about nearly everything else. They agreed on this.
The Paperwork Problem
Even setting the First Amendment aside, the supply chain risk designation was procedurally indefensible on its own terms.
The law requires the Secretary of Defense to do certain things before invoking this designation. He must consult with procurement officials. He must make a written determination that there is no less drastic option available. And he must notify the relevant congressional committees with a specific explanation of what alternatives were considered and why they were rejected.
None of that happened in any meaningful way. The letters sent to Congress were identical form letters with no discussion of alternatives. At oral argument, the government’s own lawyer admitted the letters did not contain the analysis the law requires.
The timing made things worse. Secretary Hegseth announced the designation as “final” and “effective immediately” on February 27. The internal memorandum supposedly justifying the designation was not signed until March 2. The formal letter notifying Anthropic did not arrive until the evening of March 4. The government announced the conclusion before it completed the analysis required to reach it.
There was also a remarkable email that surfaced in the record. The day after the supply chain designation was finalized, Under Secretary Emil Michael, who had written the memorandum describing Anthropic as presenting an “unacceptable national security threat,” sent Dario Amodei a friendly note reviewing draft contract language. He wrote that he thought the parties were “very close” to a deal. The court noted it was “exceedingly difficult to square” that email with the simultaneous characterization of Anthropic as a hostile actor. That is a considerable understatement.
What the Order Actually Does
The injunction does not declare Anthropic the winner. It restores the situation to what it was before February 27, 2026, while the underlying case plays out. The court was explicit that it does not require the Pentagon to use Anthropic’s products, does not prevent the Pentagon from switching to a different AI vendor, and does not prevent the government from taking any action it could lawfully have taken before this dispute began.
What it prevents is the government from treating a contract disagreement as a national security emergency, and from using a law designed for foreign saboteurs to punish an American company for speaking publicly about how its own product should be used.
A separate case involving a different legal authority is proceeding in a federal appeals court in Washington. That case continues on its own schedule, and the legal fight is far from finished.
The Larger Point
The government’s position in this case, stripped to its core, was that a company accepting federal contracts surrenders its right to public advocacy on issues related to those contracts, and that any statement the government finds inconvenient can be reframed as a national security threat.
That is a remarkable position. It would mean that any technology company working with the government risks existential retaliation if it says publicly that its product has limits, or that certain uses of its technology raise ethical concerns it is not willing to ignore for the right price.
There is something else worth saying here, something that gets lost in the legal arguments and the First Amendment doctrine and the procedural defects.
AI is genuinely useful. In medicine, it helps diagnose disease earlier. In law, it helps surface relevant precedent faster. In science, it accelerates research that would have taken years. In everyday professional life, it helps people do their work better and more efficiently. None of that is in dispute, and none of it is what this case is about.
The question this case forces into the open is a different one. Speed is a virtue when the underlying task is one you want done faster. It is not obviously a virtue when the task is killing. Striking 1,000 targets in 24 hours is an extraordinary technical achievement. Whether it is a good outcome depends entirely on whether those 1,000 targets were correctly identified, which is a question the speed itself makes harder to answer, not easier. The faster the cycle runs, the less time exists for the kind of human judgment that catches errors before they become irreversible.
Someone decided to compress the kill chain. Someone decided that deliberation was latency to be engineered out. Someone decided to build a system that produces 1,000 targeting decisions an hour. Someone decided speed of killing was more important than accuracy. Someone decided to start the war. Calling it an AI problem gives all of those decisions, and all of those decision-makers, a place to hide.
That is the most important thing the Guardian’s reporting on the Minab school makes clear. The public conversation about Claude and the school bombing was organized around the wrong question. Claude was not involved in the targeting. The cause was a database entry that had not been updated in a decade, running through a system built to eliminate the pause in which someone might have noticed. The constitutional questions about who authorized this war, and the legal questions about whether that strike constitutes a war crime, were displaced by a technical debate that is easier to have and impossible to resolve on the terms it sets.
The retired generals and service secretaries who filed in this case understood what was really at stake. The Catholic theologians who filed understood it from a different angle. Both groups arrived at the same place.
The Shajareh Tayyebeh primary school had between 175 and 180 people in it on the morning of February 28, 2026. Most of them were girls between the ages of seven and twelve. The building had been a school for at least a decade. A database said it was a military facility. Nobody updated the database. The system was not built for the kind of pause that might have caught that. The company that had insisted its product was not ready to be part of a system like that, and had said so publicly, was declared a threat to national security for saying it.
For now, a federal court has said the government cannot punish a company for telling the truth about what its own technology can and cannot safely do. Whether that holding survives the next round of appeals, and whether any of this changes how the kill chain is built and governed going forward, are different questions.
But at least someone asked them.
Sources
Anthropic PBC v. U.S. Department of War, No. 3:26-cv-01996-RFL, Order Granting Motion for Preliminary Injunction (N.D. Cal. Mar. 26, 2026) (Dkt. No. 134)
Anthropic PBC v. U.S. Department of War, No. 3:26-cv-01996-RFL, Preliminary Injunction Order (N.D. Cal. Mar. 26, 2026) (Dkt. No. 135)
Anthropic PBC v. U.S. Department of War, No. 3:26-cv-01996-RFL, Complaint for Declaratory and Injunctive Relief (N.D. Cal. Mar. 9, 2026) (Dkt. No. 1)
Brief of Amici Curiae Former Service Secretaries and Retired Senior Military Officers, Anthropic PBC v. U.S. Department of War, No. 3:26-cv-01996-RFL (N.D. Cal. Mar. 10, 2026)
Inside the Courtroom: What Happened at the Anthropic v. Department of War Preliminary Injunction Hearing, Ash Talks Substack (Mar. 24, 2026), ashtalks.substack.com
Brief of Amici Curiae Foundation for Individual Rights and Expression, Electronic Frontier Foundation, Cato Institute, Chamber of Progress, and First Amendment Lawyers Association, Anthropic PBC v. U.S. Department of War, No. 3:26-cv-01996-RFL (N.D. Cal. Mar. 13, 2026)
Brief of Amici Curiae Catholic Moral Theologians and Ethicists, Anthropic PBC v. U.S. Department of War, No. 3:26-cv-01996-RFL (N.D. Cal. Mar. 13, 2026)
Brief of Amici Curiae Former U.S. National Security Officials, Society for the Rule of Law, Anthropic PBC v. U.S. Department of War, No. 3:26-cv-01996-RFL (N.D. Cal. 2026), available at societyfortheruleoflaw.org
AI Got the Blame for the Iran School Bombing. The Truth Is Far More Worrying, The Guardian (Mar. 26, 2026), theguardian.com
AI Targeting System Doubles Pace of U.S. Strikes in Iran, AZFamily/Arizona State University Future Security Initiative (Mar. 25, 2026), azfamily.com
AI at War in Iran: Ruthless Targeting Machine or Risky Shortcut?, The National (Mar. 11, 2026), thenationalnews.com
The Military’s Use of AI, Explained, Brennan Center for Justice (2026), brennancenter.org
Military AI as “Abnormal” Technology, Lawfare (Mar. 2026), lawfaremedia.org
The Risks and Inefficacies of AI Systems in Military Targeting Support, ICRC Humanitarian Law and Policy Blog (Sept. 4, 2024), blogs.icrc.org