AI researcher Yoshua Bengio told a U.S. Senate subcommittee Tuesday that AI systems capable of human-level intelligence could be a few years away and pose potentially catastrophic risks, as governments around the world debate how to control a technology that is alarming some of its earliest developers.
Some of the necessary components of this intelligence are still missing, such as the ability to reason. “Yet my own work in this space leads me to believe that AI researchers could be close to a breakthrough on these missing pieces,” wrote Prof. Bengio, who is the scientific director of Mila – Quebec Artificial Intelligence Institute.

AI systems can also be misaligned, causing unintended consequences on the way to achieving a programmed objective. In a few years, it is possible that a “loss of control” scenario could emerge, in which an AI system concludes that it must avoid being shut off in order to achieve its goal, and “conflict may ensue” if someone intervenes.
Recently, Prof. Bengio has emerged as a vocal advocate for new laws governing AI and was one of the prime signatories on an open letter urging the federal government to pass Bill C-27, which contains the Artificial Intelligence and Data Act, a framework for regulating the technology. Lawmakers in the European Union, meanwhile, agreed to a draft version of the EU AI Act in June, which regulates applications based on their level of risk. Some uses of AI, such as facial recognition in public spaces, are banned outright, while “high-risk” systems that could negatively affect safety and human rights will have to be registered. Makers of generative AI applications like ChatGPT will have to publish summaries of copyrighted data used in training.