Tech experts expect artificial intelligence to transform many aspects of human life in the years ahead. At long last there is a glimmer of hope that artificial intelligence can be developed in a manner that generates public trust and ensures cybersecurity. Big Tech leaders have long acknowledged the societal risks of the technology. Friday's announcement at the White House that seven leading U.S. AI companies have agreed to voluntary safeguards is a step in that direction. Among the commitments:
• Independent testing of AI products for safety before they are released, with a primary focus on cybersecurity and biosecurity.

• A commitment to develop and deploy advanced AI systems to help address society's greatest challenges, including cancer prevention and mitigating climate change.

When push comes to shove, however, the voluntary nature of the agreement makes it unenforceable. The temptation will be great for tech firms to ignore the safeguards if they believe doing so will put them at a competitive advantage. The agreement also contains glaring problems and omissions.
Chief among them: the agreement fails to include a commitment that companies disclose the data scraped from the internet to train their AI systems. Artists, writers and musicians have been up in arms over the ability of AI to appropriate their works and likenesses. They want ways to opt out of AI data grabs and to protect their creative works.