Helen Warrell, FT investigative reporter
It's July 2027 and China is on the verge of invading Taiwan. Autonomous drones with AI guidance capabilities are called upon to overcome the island's air defenses as a series of devastating AI-generated cyberattacks have cut off power supplies and key communications. Meanwhile, a massive disinformation campaign run by a pro-China AI meme farm is spreading across global social media, dampening protests over Beijing's aggression.
Scenarios like these have brought a dystopian horror to the debate over the use of AI in warfare. Military commanders hope for a digital force that can fight faster and more accurately than humans. But there are concerns that, as AI takes an increasingly central role, those same commanders will lose control of a conflict that escalates too quickly and operates without ethical or legal constraint. Henry Kissinger, the former US Secretary of State, spent his final years warning about the coming catastrophe of war controlled by artificial intelligence.
Recognizing and mitigating these risks is the military priority (some would say the “Oppenheimer moment”) of our time. There is an emerging consensus in the West that decisions regarding the deployment of nuclear weapons should not be outsourced to AI. UN Secretary-General Antonio Guterres has gone further, calling for an outright ban on fully autonomous lethal weapons systems. It is vital that regulation keeps pace with evolving technology. But amid the hype fueled by science fiction, it is easy to lose track of what is actually possible. As researchers at the Harvard Belfer Center note, AI optimists often underestimate the challenges of fielding fully autonomous weapons systems. It is possible that AI's capabilities in combat are exaggerated.
Anthony King, director of the Institute of Strategy and Security at the University of Exeter and a key proponent of this argument, suggests that rather than replacing humans, AI will be used to improve military understanding. Even as the nature of war changes and remote technologies improve weapons systems, he insists, “the complete automation of war itself is simply an illusion.”
None of the three current military uses of AI involves full autonomy. It is being developed for planning and logistics; for cyber warfare (sabotage, espionage, hacking and information operations); and, most controversially, for weapons targeting, an application already in use on the battlefields of Ukraine and the Gaza Strip. Ukrainian troops are using AI software to guide drones capable of evading Russian jammers as they approach sensitive sites. The Israel Defense Forces have developed an AI decision-support system known as Lavender, which has helped identify 37,000 potential human targets in the Gaza Strip.
Clearly, there is a danger that the Lavender database reproduces biases in the data on which it is trained. But military personnel have prejudices too. One Israeli intelligence officer who used Lavender claimed he had more faith in the fairness of “a statistical mechanism” than in that of a grieving soldier.
Techno-optimists developing artificial intelligence weapons even deny that any new controls are needed to govern their capabilities. Keith Dear, a former British military officer who now runs the strategic forecasting company Cassi AI, says existing laws are more than enough: “You have to make sure there's nothing in the training data that could cause the system to fail… when you're confident, you deploy it – and you, the human commander, are responsible for anything it might do that goes wrong.”