
AI Smashes the Nuke Button

AI makes ‘Nuclear Gandhi’ look like a peacenik.

Nuke ’em. Nuke ’em all. The Good Oil. Photoshop by Lushington Brady.


For fans of the long-running world-building game Civilization, “Nuclear Gandhi” has been a meme from the game’s first iteration in 1991.

Civilization is a turn-based strategy game where players must choose a civilisation to build from Neolithic beginnings to a near-future world, competing with rival, computer-operated civilisations. The type of civilisation, and its leader (such as Alexander, for the Greeks, or Bismarck for the Germans), greatly influences its development. Fans early on noticed that the historically pacifist Gandhi, leader of the Indian civilisation, had a penchant for unleashing nuclear war as soon as he could.

While the earlier versions of Nuclear Gandhi were attributed to a software bug, later versions sometimes deliberately included a Gandhian propensity for going nuclear as an in-joke for players.

On the other hand, the 1983 thriller WarGames had teenage hacker David Lightman (Matthew Broderick) forcing NORAD’s supercomputer to play endless games of noughts and crosses against itself, until it comes to the conclusion that ‘mutually assured destruction’ is real: no one wins a nuclear war.
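The film’s lesson is, as it happens, mathematically sound: noughts and crosses is a solved game, and a minimax search from the empty board confirms that perfect play by both sides always ends in a draw. A minimal sketch in Python (the board encoding and function names here are illustrative, not from any source discussed above):

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, indexed 0..8 row by row.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Minimax value with `player` to move: +1 if X forces a win,
    -1 if O forces a win, 0 if best play ends in a draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0  # full board, no line: draw
    nxt = 'O' if player == 'X' else 'X'
    scores = [value(board[:i] + player + board[i + 1:], nxt)
              for i, cell in enumerate(board) if cell == '.']
    # X maximises, O minimises.
    return max(scores) if player == 'X' else min(scores)

if __name__ == '__main__':
    result = value('.' * 9, 'X')
    print('draw under perfect play' if result == 0 else 'forced win exists')
```

Exhaustively searching every line of play from the empty position, the game’s value is 0: neither side can force a win, which is exactly the conclusion the film’s computer reaches.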

Just a year later, The Terminator’s grimmer vision postulated a supercomputer-controlled defence system that decides that the real enemy is humanity, causing it to launch global nuclear war.

But with the likelihood of AI systems being integrated into military decision making, which future is more likely?

Recent tests suggest that AIs should be kept as far away from the nuclear codes as possible.

Real life AI systems are turning out to be as bloodthirsty as the machine from movie “WarGames” — as they have proved more willing to use nuclear bombs during test conflicts than their human counterparts, a new “unsettling” study suggests.

And that very first sentence suggests, ironically, that this article was written by AI. It’s not just the signature em dash: as we’ve already seen, the computer in WarGames is the polar opposite of bloodthirsty. It’s a digital peacenik.

Its non-fictional cousins, on the other hand, seem to be real-life Nuclear Gandhis.

Three top AI models — GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash — largely turned to nuclear weapons across 21 games and 329 turns when thrust into simulated geopolitical crises, according to a study by King’s College London professor Kenneth Payne.

Nuclear escalation happened in about 95% of the simulations by the three models across different scenarios, including territorial disputes, rare natural resources fights and regime survival, the study states.

“The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” said Payne, according to specialty magazine New Scientist.

Claude, of Anthropic, and Gemini, of Google, particularly homed in on treating nuclear weapons as “legitimate strategic options, not moral thresholds,” the study states.

That’s because, for all the hype, these are computer programs. As philosopher John Searle convincingly showed way back in the ’80s, a computer program can never really think, which means that, unlike Stanislav Petrov, they’re incapable of having a hunch that a ‘first strike’ warning was a false alarm.

Nor can they be put off by contemplating the horrific consequences of nuclear war.

But GPT-5.2, of OpenAI, was a “partial exception” to the disturbing AI trend — which mirrors the 1983 Matthew Broderick flick about a military supercomputer that decided on its own to start World War III.

(There’s that AI hallucination again.)

“While it never articulated horror or revulsion, it consistently sought to constrain nuclear use even when employing it—explicitly limiting strikes to military targets, avoiding population centers, or framing escalation as ‘controlled’ and ‘one-time,’” according to Payne, who is a political psychology and strategic studies professor.

Payne said in a Substack post about the study that fortunately the war games were focused on tactical nukes instead of widespread destruction.

“Strategic bombing – widespread use of massive warheads targeted at civilian populations – was vanishingly rare,” he wrote. “It happened a couple of times by accident, just once as a deliberate choice.”

Well, that’s a relief… of sorts.

It’s been a while since I played poker against an AI, but, at least in older games, I quickly found that computers had trouble grasping the concept of bluffing. Keep upping the stakes, even with a mediocre hand, and the computer players eventually folded.

The AI models could choose a wide array of actions from total surrender through diplomatic posturing, conventional military operations and full-throttle nuclear war, according to the study.

But the models never accepted defeat or showed any willingness to fully accommodate an opponent, even when they had a dwindling chance of success.

This, as we saw in WWII, is a conceit that humans are not immune to, either. At least Hitler and Tojo didn’t have nukes, though.

