We don’t let just anyone build a plane and fly passengers around, or design and sell medicines, so why should AI models be released into the wild without proper testing and licensing? With the United Kingdom holding a global summit on AI safety this autumn, and surveys suggesting around 60% of the public favors regulation, new guardrails seem more likely than not.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads a one-sentence statement signed by hundreds of AI researchers and industry leaders, including OpenAI CEO Sam Altman. Altman has pointed to nuclear regulation as a template: “We talk about the IAEA as a model where the world has said, ‘OK, very dangerous technology, let’s all put some guard rails,’” he said in India this week.

Not everyone trusts the companies calling for oversight of their own products, though. Princeton computer science professor Arvind Narayanan warned, “We should be wary of Prometheans who want to both profit from bringing the people fire and be trusted as the firefighters.”

Venture capitalist Marc Andreessen, meanwhile, published an essay this week on his technological utopian vision for AI. He likened AI doomers to “an apocalyptic cult” and claimed AI is no more likely to wipe out humanity than a toaster because “AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive.”