
Anthropic Told the Pentagon No. Now the Pentagon Wants to Destroy Them.
In January, U.S. special operations forces captured Venezuelan dictator Nicolás Maduro. An AI model helped process intelligence and analyze satellite imagery during the raid. That model was Claude, built by Anthropic, deployed through Palantir's classified systems.

Within days, an Anthropic executive called a Palantir executive to ask whether Claude had been used in the operation. A senior Pentagon official described the call: "It was raised in such a way to imply that they might disapprove of their software being used, because obviously there was kinetic fire during that raid. People were shot."

That phone call set off a chain of events that now threatens to turn the most safety-conscious AI company in the world into a pariah of the American defense establishment.

The Two Red Lines

The Pentagon is pushing four leading AI labs — OpenAI, Google, Anthropic, and xAI — to let the military use their tools for "all lawful purposes," including weapons development, intelligence collection, and

