Do we want a world in which algorithms hold the trigger?
Another week, another display of the United States administration's hypocrisy. It has just tried to strong-arm Anthropic into surrendering the artificial intelligence company's ethical red lines along with its code. The message from the administration was blunt: remove the safeguards that prevent your Claude AI from being used for fully autonomous targeting. A Pentagon contract worth hundreds of millions of dollars is on the line. And when Anthropic said "no", the White House ordered every federal agency to "IMMEDIATELY CEASE" using Anthropic's technology.
The Pentagon threatened to label the company a "supply chain risk" — the kind of scarlet letter usually reserved for Huawei or Kaspersky. It even invoked the Defense Production Act, a Cold War-era law now wielded as a cudgel to force industry into compliance.
A federal judge has since called the US administration's actions "unconstitutional retaliation". But the damage is done. The message to every other AI wunderkind in the Valley is unmistakable: Build a kill switch, and we will make sure you are the one who gets switched off.
This isn’t just a contract dispute. For years, the chattering classes have debated “meaningful human control” over lethal autonomous weapons — what diplomats call LAWS. The UN chief wants a treaty by 2026. The International Committee of the Red Cross warns of an accountability gap. And some researchers have laid out the nightmare in clinical detail.
First comes the accountability gap. If a machine hallucinates — as large language models are wont to do — and mows down a school bus instead of a missile launcher, who goes to The Hague? The commander who deployed it will blame the code. The programmer will blame the black box. The machine can’t form criminal intent or be court-martialed. As Anthropic’s CEO put it, frontier AI systems are simply “not reliable enough” for life-or-death decisions. They are brilliant pattern matchers with no moral compass.
Then there is the problem of distinction. An algorithm can be trained to spot a target. But it cannot understand why a tank is a "legitimate" target, or why attacking a desalination plant nearby would constitute a "war crime". The algorithm cannot show mercy. It cannot weigh competing obligations. And as we saw with some forces' AI-enabled targeting in Gaza, when the system decides who is a militant based on opaque criteria, the result is mass civilian casualties and zero accountability.
And if we build these weapons, they will spread. Terrorist groups won’t need suicide bombers if they have a $1,000 quadcopter with facial recognition and a pre-programmed target list.
So what is the international community supposed to do? The US has vocally opposed binding "rules", calling them "barriers to innovation", while building its own AI arsenal. Europe wants precaution. Some other major players are preparing for the worst-case scenario. Everyone wants to win.
Anthropic tried to do the responsible thing. It drew a line: no lethal autonomous warfare without human oversight. No mass surveillance of US citizens. And for that, the US administration tried to destroy it.
Some big names in AI have already dropped the no-weapons pledge, scrubbed "safety" from their mission, or shown their readiness to salute. They have learned the lesson: If you play nice with the generals, you get the contracts. If you grow a conscience, you get the boot.
We are sleepwalking into a world where algorithms can pull the trigger. We need a legally binding treaty, a moratorium on unreliable systems, and strict, effective regulation of offensive autonomous weapons, if not an outright ban. But that requires political will — and right now, Washington's will is aimed at the wrong target.
The killer robots are coming. And the only thing standing between us and an automated Armageddon might just be a few holdout coders and a federal judge with a backbone.