“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the message in its entirety, which was coordinated by the Center for AI Safety, a U.S. non-profit organization. An accompanying explanation notes that it can be difficult to express concerns about the most severe risks posed by AI, and that the statement is designed to open discussion.
Indeed, the notion that AI models will someday be powerful enough to outwit humans, run amok, or otherwise wreak havoc on humanity can be a divisive one. Some experts believe the technology is far too underdeveloped and that such scenarios are highly unlikely.
Since then, Mr. Hinton has given interviews to media around the world about how quickly AI models, particularly those that can generate text and images, have advanced in recent months, and about the potential threats as these models become more powerful. “There are no simple solutions that I can see,” he said. “With climate change, there’s fairly simple solutions. They’re unpalatable, like stop burning carbon, but they would work. With this, it’s not so obvious what would work.”
There is, however, a robust and ongoing body of research into designing safe AI systems. Some of the signatories, including OpenAI’s Sam Altman, contend that the risks can be managed so that society can reap the benefits of the technology, such as improved drug discovery and disease detection. “I don’t think it works to just say humanity should never build AI,” he told The Globe.