“The machines rose from the ashes of the nuclear fire.” Skulls crushed under robot tank treads. Laser flashes from floating shadowy crafts. Small human figures scurrying to escape.
The trend toward incorporating AI into weapons systems is driven mostly by its speed in integrating information to reach decisions; AI is already thousands of times faster than humans at many tasks, and will soon be millions, then billions of times faster. This logic of "accelerating decision advantage," the subtitle of the new AI strategy document, can apply across all kinds of weapons systems, including nuclear weapons.
I interviewed military and AI experts about these issues, and while there was no consensus about the specific nature of the threat or its solution, there was a shared sense of very serious potential harm from AI in the use of nuclear weapons. One analysis of these issues comes from Gladstone AI, a company founded by national security experts Edouard and Jeremie Harris. I reached out to them about what I saw as some omissions in their report.
Despite this luminary's hawkish recommendation, the U.S. did not in fact strike the Soviets with either nuclear or conventional bombs. But will AI incorporated into conventional and nuclear weapons systems follow a hawkish game-theory approach, à la von Neumann? Or will the programs adopt more empathetic and humane decision-making guidelines?