Rishi Sunak at last year’s AI safety summit at Bletchley Park. The summit brokered a voluntary testing agreement with tech firms including Google, Microsoft and Meta.
A shift by tech companies to autonomous systems could “massively amplify” AI’s impact, and governments need safety regimes that trigger regulatory action if products reach certain levels of ability, the group said. The paper, called “Managing extreme AI risks amid rapid progress”, recommends government safety frameworks that introduce tougher requirements if the technology advances rapidly.
It also calls for increased funding for newly established bodies such as the UK and US AI safety institutes; forcing tech firms to carry out more rigorous risk-checking; and restricting the use of autonomous AI systems in key societal roles. “Society’s response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts,” according to the paper, published in the journal Science on Monday. “AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
“Companies are shifting their focus to developing generalist AI systems that can autonomously act and pursue goals. Increases in capabilities and autonomy may soon massively amplify AI’s impact, with risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems,” the experts said, adding that unchecked AI advancement could lead to the “marginalisation or extinction of humanity”.