None of the big AI labs are your friends, so don’t get too excited, but the Pentagon is, according to an anonymously sourced Axios story, threatening to stop using Anthropic’s AI tools because of the company’s “insistence on maintaining some limitations on how the military uses its models.”

The Pentagon source who spoke to Axios apparently said that of all the AI companies the department deals with, Anthropic is the most “ideological.”

Keep in mind that Anthropic created Claude and Claude Code, and an unsettling pattern has emerged in which the company releases a tweak to its vibe-coding systems and Wall Street obediently sells off stock in whatever kind of business its latest tools are trying to replace. This is a company that clearly wants to conquer the world, so it might not be a good idea to take any comfort from the many stories about how the people who work there are kinda uncomfortable with what conquering the world entails.