"the question that organised the coverage was whether Claude, a chatbot made by Anthropic, had selected the school as a target."
This article is the first I have seen that mentions Claude in relation to this specific incident. There's been plenty of talk about AI use in warfare in general, but in the case of this school most of the coverage I have seen pointed to outdated information and procedures not being properly followed.
Amodei looks absolutely prescient for taking a stand against the use of Claude in the kill chain. Not to mention how utterly foolish the DoD looks declaring Claude a national security threat while simultaneously using it to choose targets. No wonder they got humiliated in court.
Well, to people who don't believe in precognition, it sounds like Anthropic had quality control engineers dedicated to their military clients' usage. Basically running through the prompts, inspecting the answers, and digging deeper into how their chatbots arrived at those answers. Somebody must have pressed the high-alert button, resulting in Anthropic taking a stance.
Certainly possible, but I'd assume the DoD expressly forbade anyone from looking at its usage, and Anthropic had to accept that to win the contract. They may have gotten wind of what was going on some other way.
You, today, can use Claude in Amazon Bedrock, and the way that works, if you want it to be this way, is that the code, the model weights, and whatever other artifacts are involved run on Bedrock itself. Bedrock is not a facade over Claude's token-billed RESTful API, where Anthropic runs its own infrastructure. In the strictest sense, Bedrock can be used as a facade over lower-level Amazon services that obey non-engineering, real-world concerns: geographic and physical boundaries (which physical data center hardware is connected to what, and where) and jurisdictional boundaries. It's multi-tenancy in the sense that Amazon has multiple customers, but it's not multi-tenancy in the usual technical sense, because if you're willing to pay for these requirements, Amazon has sorted out how to run the Claude model weights for you, as though it were an open-weights model you downloaded off Hugging Face, without actually giving you the weights, while letting you satisfy all the IP, jurisdictional, and other non-technical requirements you're paying for, in a way that Anthropic has also agreed to.
This is what the dispute with the Pentagon is about, and what people mean when they say Claude is used in government (it is used in Elsa for the FDA, for example, too). Anthropic doesn't get telemetry, like the prompts, under this arrangement: they have a contract that says what you can and cannot use the model for, but they cannot prove how you use it, which of course they can when you use their RESTful API service. They can't "just" paraphrase your user data and train on it, the way they do with the RESTful API service. There are reasons people want this arrangement ($$$).
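For anyone who hasn't touched Bedrock, here is a minimal sketch (not from the thread) of the two call paths described above: one request is signed with AWS credentials and terminates inside AWS infrastructure, the other goes to Anthropic's own API. The model IDs, the region, and the API key placeholder are illustrative assumptions, not anything confirmed by the article.

```python
import json

import boto3
import requests

# Path 1: Claude hosted on Amazon Bedrock. The request is signed with AWS
# credentials and handled inside the AWS region you pick; nothing is sent
# to api.anthropic.com. (Region and model ID below are illustrative.)
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
resp = bedrock.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "hello"}],
    }),
)
print(json.loads(resp["body"].read())["content"][0]["text"])

# Path 2: Anthropic's own token-billed REST API. Here the prompt goes to
# Anthropic's servers, which is why usage is visible to them in this mode.
resp = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": "ANTHROPIC_API_KEY",  # placeholder credential
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-5-sonnet-20240620",  # example model ID
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "hello"}],
    },
)
print(resp.json()["content"][0]["text"])
```

The only point of the sketch is where each request terminates, which is what determines who can see the prompts.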
The vendor (Palantir) can use whatever model it wants, right? It says it chose Claude via "Bedrock"; I don't know whether it actually uses Claude via Bedrock. Ask them. But that's what they are essentially saying, and that's what this is about. Palantir could just as well use Qwen3 and run it on the same datacenter hardware. Do you understand? It matters, but it also doesn't matter.
It's a bunch of red herrings in my opinion, and this sort of stuff being a red herring is what the article is mostly about.
As I understood it, Anthropic was prime on their own contract, which the DoD infamously (and unsuccessfully) tried to renegotiate mid-term. Are you saying that Palantir had some subcontracted use of Claude independent of Anthropic's existing contract?