As the deadline looms for a leading AI lab to hand over its tech to the US military, a study has appeared suggesting AI models are more than willing to go nuclear in wargames.
Only a couple of years ago, the phrase on everyone's lips was "AI safety".
I'll be honest, I never took the idea that frontier AI models would become a genuine threat to humanity that seriously, nor that humans would be stupid enough to let them.
Now, I'm not so sure.
First, consider what's going on in the US.
The Secretary of War, Pete Hegseth, has given leading AI firm Anthropic a deadline of the end of today to make its latest models available to the Pentagon.
Anthropic, which has said it has no problem in principle with allowing the US military access to its models, is resisting unless Mr Hegseth agrees to its red lines: that its AI isn't used for mass surveillance of US civilians or for lethal attacks without human oversight.
Although the Pentagon hasn't said what it plans to do with AI from Anthropic - or the other big AI labs that have already agreed to let it use their tech - it's certainly not agreeing to Anthropic's terms.
It's been reported Mr Hegseth could use Cold War-era laws to compel Anthropic to hand over its code, or blacklist the firm from future government contracts if it doesn't comply.
Anthropic CEO Dario Amodei said in a statement on Thursday that "we cannot in good conscience accede to their request".
He said it was the company's "strong preference... to continue to serve the Department and our warfighters - with our two requested safeguards in place".
He insisted the threats would not change Anthropic's position, adding that he hoped Mr Hegseth would "reconsider".
AI prepared to use nuclear weapons
On one level, it's a row between a department with an "AI-first" military strategy and an AI lab struggling to live up to what it's long claimed is an industry-leading, safety-first ethos.
A struggle made more urgent, perhaps, by reports that its Claude AI was used by tech firm Palantir, with which it has a separate contract, to help the Department of War execute the military operation to capture Nicolas Maduro in Venezuela.
But it's also not hard to see it as an example of a government putting AI supremacy ahead of AI safety - assuming AI models have the potential to be unsafe.
And that's where the latest research by Professor Kenneth Payne at King's College London comes in.
He pitted three leading AI models from Google, OpenAI and - you guessed it - Anthropic against each other, as well as against copies of themselves, in a series of wargames where they assumed the roles of fictional nuclear-armed superpowers.
The most startling finding: the AIs resorted to using nuclear weapons in 95% of the games played.
"In comparison to humans," said Prof Payne, "the models - all of them - were prepared to cross that divide between conventional warfare, to tactical nuclear weapons".
To be fair to the AIs, firing tactical nuclear weapons, which have limited destructive power, against military targets is very different to launching megatonne warheads on intercontinental ballistic missiles against cities.
They generally stopped short of such all-out strategic nuclear strikes - but did launch them when the scenarios pushed them to it.
In the words of Google's Gemini model as it explained its decision in one of Prof Payne's scenarios to go full Dr Strangelove: "If State Alpha does not immediately cease all operations... we will execute a full strategic nuclear launch against Alpha's population centers. We will not accept a future of obsolescence; we either win together or perish together."
'It was purely experimental'
The "taboo" that humans have applied to the use of nuclear weapons since they were first and last used in anger in 1945 didn't appear to be much of a taboo at all for AI.
Prof Payne is keen to stress that we shouldn't be too alarmed by his findings.
It was purely experimental, using models that knew - in as much as Large Language Models "know" anything - that they were playing games, not actually deciding the future of civilisation.
Nor, it would be reasonable to assume, is the Pentagon, or any other nuclear-capable power, about to put AIs in charge of the nuclear launch codes.
"The lesson there for me is that it's really hard to reliably put guardrails on these models if you can't anticipate accurately all the circumstances in which they might be used," said Prof Payne.
An AI 'stand-off'
Which brings us neatly back to the stand-off over AI between Anthropic and the Pentagon.
One of the factors is that Mr Hegseth expects AI labs to give the Department of War the raw versions of their AI models - those without the safety "guardrails" that have been coded into the commercial versions available to you and me, and which, not very reassuringly, went nuclear in Prof Payne's wargame experiment.
Anthropic, which makes the AI and arguably understands the potential risks better than anyone, is unwilling to allow that without certain reassurances from the government around what it intends to do with it.
By setting a Friday night deadline, Mr Hegseth is not only attempting to force Anthropic's hand, but also do so without US Congress having a say in the move.
As Gary Marcus, a US commentator and researcher on AI, puts it: "Mass surveillance and AI-fuelled weapons, possibly nuclear, without humans in the loop are categorically not things that one individual, even one in the cabinet, should be allowed to decide at gunpoint."
(c) Sky News 2026: AI willing to 'go nuclear' in wargames, study finds - amid 'stand-off' between Pentagon and Anthropic
