What happened when the world's most anxious AI company met the world's most belligerent Government in a San Francisco courtroom? Day one of a hearing where the outcome will impact us all and go some way to define the role of AI in society, including those folks minding their own business in Greenland.
This is Part 2 of our series on the Anthropic vs US Government case.
- Part 1: The Anthropic-Pentagon Showdown: How America's Most Safety-Obsessed AI Company Became a National Security Threat
- Part 2: Anthropic vs the US Government: Day One in the Courtroom (you are here)
- Part 3: Anthropic Wins: Pentagon AI Ban Ruled Illegal
I want to begin, if I may – and I rather think I may, since this is my blog and you have already scrolled this far – by pointing out that this situation is completely barking mad. We have, on one side, a responsible company, Anthropic, with a clear set of values and social conscience, founded by people who left another AI company because they were worried about AI safety, and then immediately went and built a very powerful AI tool.
And on the other side, we have the U.S. Government. Now, I don't want to be unkind about these folks as I've visited the country many times, I have friends there, lovely people, love the baseball, tremendous food portions – but as an institution for comprehending anything involving tech, well, it does have certain limitations.
Anthropic wants a judge to issue an injunction to pause the Pentagon's supply chain risk designation and President Trump's directive banning federal agencies from using its Claude AI models. On day one of the hearing, on a sunny day in San Francisco, District Judge Rita Lin said her concern is that Anthropic is being 'punished' and questioned if the Department of War has violated the law. The Pentagon's decision to blacklist Anthropic looks like an attempt to cripple the company, she added.
During the hearing, Lin asked lawyers on both sides a number of questions about the details of the case. She said her concern is whether Anthropic is being punished for publicly criticising the Government's contracting position. Judge Lin said: "I see the question in this case as being whether the Government violated the law."
The Government's Argument
Eric Hamilton, lawyer for the U.S. Government, responded: "We have come to worry that Anthropic may in the future take action to sabotage or subvert IT systems – which is why the company was designated a supply chain risk. What happens if Anthropic installs a kill switch, or some functionality that changes how its models behave? That is an unacceptable risk."
Later in the hearing, Lin pressed Hamilton about when the Government views a supply chain risk designation as the appropriate course of action:
"What I'm hearing from you is that it's enough if an IT vendor is stubborn and insists on certain terms and it asks annoying questions, then it can be designated as a supply chain risk because they might not be trustworthy. That seems a pretty low bar."
Anthropic has argued that there is no basis to consider the company a supply chain risk and also said it is being unfairly retaliated against because it demanded that the Department of War not use Claude for fully autonomous weapons or mass surveillance of Americans. The Pentagon insists it does not use the AI models for such purposes.
"Anthropic is not just acting stubbornly. It's not just refusing to agree to contracting terms. Instead, it's raising concerns about how we use its technology in military missions," Hamilton stated. "WE will decide the fate of our Country – NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about," added Trump after day one had closed, evidencing his gravitas and continued presidential style of leadership.
Silicon Valley Meets Capitol Hill
As we saw in day one of the hearing from interviews outside the courthouse, the average congressman's grasp of how a neural network functions is roughly equivalent to my understanding of why Americans put ice in absolutely everything.
"Senator, I'm not able to explain how the model works in terms of a series of tubes. It's more like... well, it's just not like that." Not off to a good start.
The hearing is the Big Thing this week. Nothing quite illustrates the chasm between Silicon Valley and Capitol Hill like a Congressional hearing. Imagine a very earnest young engineer in a San Francisco 49ers vest attempting to explain transformer architecture to a senator who, last week, asked a technology CEO whether he should turn his internet off at night to save energy. The senator meant well. The senator is trying very hard. The senator is doing his best. Which is both admirable and absolutely terrifying.
Anthropic's position in all this is magnificently principled and elegantly ideological. The company's entire raison d'être, and I use the French because it adds a certain je ne sais quoi to the ridiculousness, is that AI might destroy humanity, which is why they are building it. Maybe I got that wrong, although this is stated openly in their court documents with a straight face. The logic, as far as I can deduce, goes: if someone is going to build potentially civilisation-ending technology, it had better be us, because we are the responsible ones.
Meanwhile, the Government's position (and imagine this being said by Tetchy Trump in his own echo chamber) is that it would like to own and regulate AI like we'd like to own Greenland and indeed the Internet itself, but first it would like to understand what AI is, and could someone please explain it more slowly, possibly with diagrams, and also – and this is a genuine request that emerged in the hearing – could the AI show its workings, the way children do in maths exams?
The answer to this, delivered with heroic patience and aplomb by a software engineer in a 49ers vest who went home and lay very still on a floor in a dark room, is: "we are working on interpretability," which is the polite technical way of saying No, but we're terribly sorry about it.
Weapons, Surveillance, and Export Controls
The specific matters of contention are wonderfully complex. Trump has also added the question of export controls – that is, whether American AI ought to be shared with other countries, particularly ones that America finds geopolitically vexing.
Anthropic's view is nuanced, carefully considered, and approximately thirty pages long. The Government's view, which runs to forty pages, is that China Should Not Have This Thing, which has the advantage of being short enough to fit on a placard for a demo in Times Square by students from Cornell, and the disadvantage of not quite capturing the full complexity of C21st global technology transfer.
Then there is the matter of what the AI actually says. Congress would like it to say certain things and not say other things. Anthropic would also like it to say certain things and not say other things. These overlap in the way that all Venn diagrams involving the Government and tech companies overlap: slightly, in the middle, in an area that everyone can agree on and that contains approximately nothing interesting.
The Government wants the AI not to help make bombs. Anthropic also wants the AI not to help make bombs. Tremendous progress has been made. Now we move on to the vastly more complex question of whether the AI should be allowed to enable covert surveillance of citizens, and things begin to get philosophical rather quickly.
What I find most poignant is the basic human drama underneath it all. Anthropic is a collection of smart, thoughtful folks who are genuinely frightened about the dystopian future their own work could usher in – but who keep building it anyway, because they believe that if they stop, someone less frightened will build it instead, it won't be as good, and that someone will make all the money.
This is not a comfortable position – it's the kind of position that sends you to a therapeutic beer-drinking session and a long sit with the sea lions at Pier 39, and I suspect several Anthropicans are considering this, which is very sensible of them.
And you have, in the Government, a collection of people who have been elected to make hard decisions about hard things without using AI, and who have now been handed a hard thing that, by the admission of its own creators, may be one of the most consequential things ever made. They are not stupid. Some of them are quite clever. They are simply operating several conceptual frameworks behind where the situation has got to, which, where AI is concerned, is an experience many of us share.
There is a National AI Strategy which the attorneys for the U.S. Government tabled at the hearing, which looked impressive when printed out and in shiny leather binders, and which will be approximately as relevant to what actually happens with AI as a 1997 government report on the future of the compact disc.
This is not a criticism. Strategies are necessary, someone has to write them – it is simply an observation that the thing being strategised about has the habit of changing faster than the strategy can be approved, reviewed, redacted, spell-checked, amended, leaked to the Chinese, sent back for consultation, circulated to relevant departments, argued over, revised, and finally published.
Anthropic's Responsible Scaling Policy
Anthropic, for their part, has a thing called a Responsible Scaling Policy – nothing to do with climbing Mount Rushmore – which commits them to certain behaviours at certain capability levels, defined by something called an AI Safety Level, which goes from one to four, with four being (and I am reading from the actual document) a point at which "many instances of autonomous Claude could potentially compress decades of scientific progress into just a few years."
Now. Press pause. I want to sit with that sentence for a moment. Decades of scientific progress in a few years. And these are the responsible people. I dread to imagine what the irresponsible people are planning.
The authoritarianism and nationalism Trump espouses make a dangerous backdrop to the democratising promise of AI as an enabling technology that could benefit everyone. America is locked in a worldwide battle for influence, and under Trump, you suspect it could end up as the bad guy in an AI shootout. Anthropic's stance is laudable, but not without a degree of vested interest and grandstanding.
This experiment in legislating C21st digital civilisation will end shortly, hopefully before Elon weighs in with an opinion. On a note of cautious, mild optimism, both parties (the earnest AI company and the baffled Government) are at least talking to each other. OK, maybe they're shouting, as all Americans are loud.
This is more than can be said for most parties in most disputes about most important things. The conversations I observed yesterday were confused, contradictory, and occasionally farcical. But they are happening. In the history of human beings trying to sort things out before they blow themselves up, talking is a good thing.
Judge Lin said she expects to issue an order on Anthropic's motion in the next few days. Read Part 3 for the ruling and what the judge said.
Continue the series:
- Part 1: The Anthropic-Pentagon Showdown
- Part 2: Anthropic vs the US Government: Day One in the Courtroom (you are here)
- Part 3: Anthropic Wins: Pentagon AI Ban Ruled Illegal
