A thousand targets in a day
Or how a machine took over killing responsibility

On February 28, 2026, the American military accomplished in 24 hours what previously took weeks: identifying, prioritizing, and nominating roughly 1,000 targets in Iran.

This was enabled by the Palantir Maven Smart System and Claude, an Anthropic language model.

How it worked:

▪️Maven ingested and structured data from satellites, drones, and databases.

▪️Claude integrated this into target packages with coordinates, weapons, and legal justifications.

▪️An officer performed only a final review.

The Washington Post reported that some Anthropic employees opposed military contracts.

But the Pentagon and Palantir settled on a legal framework in which Claude "proposes" targets rather than makes "lethal decisions." In practice, the algorithm decides, since no human can verify 1,000 targets.

AI-guided targeting has become standard military practice, increasing both the speed and the scale of targeting.

But this obscures moral and legal responsibility when a machine "proposes" and an officer presses a button.


Source: Telegram "llordofwar"
