The Anthropic statement here indicates the Pentagon asked for those use cases, which Anthropic rejected, and which OpenAI is now enabling.
This feels like cyberpunk corpo wars.
There’s no reason to take Anthropic at its word; they probably just wanted more money. This analysis is vibes-based, but given the whole Snowden thing, I don’t have a reason to believe LLMs haven’t already been deployed for ongoing mass surveillance and murder.
Anthropic is definitely a bloodthirsty corpo; they just couldn’t keep up with the US admin.
CEO: “Anthropic has therefore worked proactively to deploy our models to the Department of War […] Claude is extensively deployed across the Department of War”
Anthropic already had contracts with Palantir. It’s hard to claim the moral high ground after that.
Let’s make the air poison and boil the water to make autonomous kill bots.
Huh. Mordor is making its orcs.
Yeah, no: letting a slop toaster roam an area in a murder-bot body will certainly not backfire.
If you run away from the killbot you’ll be charged with treason
Altman is so full of shit.
Hours after the Trump administration’s comments, OpenAI CEO Sam Altman posted on X Friday night that the company had struck a deal with the Department of Defense to deploy its models on the department’s classified networks. Altman said the Department of Defense “displayed a deep respect for safety and a desire to partner to achieve the best possible outcome” in their interactions.
“AI safety and wide distribution of benefits are the core of our mission,” Altman wrote. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW [Department of War] agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
Altman also said OpenAI will create “safeguards to ensure our models behave as they should, which the DoW also wanted.” It is unclear if or how the safety-focused measures in OpenAI’s agreement differ from those in the Anthropic negotiations.
human responsibility for the use of force, including for autonomous weapon systems
An autonomous weapon system that asks a human for confirmation before killing something isn’t really autonomous, is it? So why say “autonomous” at all?
Do you think operators won’t just click OK every time?
I’m commenting on how they’re obviously talking shit. They will be autonomous. The human part is a lie.
Yeah, they might make a PowerPoint about their human in the loop system, but then there’ll be a big AUTOKILL toggle next to the operator “for debugging purposes” that oddly enough doesn’t log anything.
“Hi, we would like to use our text classification algorithm to run your murderbots”: a contender for the utterly-deranged-statement hall of fame that somehow managed to impinge on our reality.
Who is the bloodthirsty ghoul in the photo?