Recent advances in generative AI systems that can create text or imagery have triggered a rush among companies adapting the technology for tasks like web search and writing recommendation letters. But the new algorithms have also renewed concern about AI reinforcing oppressive social systems like sexism or racism, boosting election disinformation, or becoming a tool for cybercrime.
It’s unclear how much the agreement will change how major AI companies operate. Growing awareness of the technology’s potential downsides has already made it common for tech companies to hire people to work on AI policy and testing. Google has teams that test its systems, and it publicizes some information, such as intended use cases and ethical considerations. Meta and OpenAI sometimes invite external experts to try to break their models, an approach dubbed red-teaming.
“Guided by the enduring principles of safety, security, and trust, the voluntary commitments address the risks presented by advanced AI models and promote the adoption of specific practices—such as red-team testing and the publication of transparency reports—that will propel the whole ecosystem forward,” Microsoft president Brad Smith said in a blog post.