Austin, United States—For people at the trend-setting tech festival here, the scandal that erupted after Google’s Gemini chatbot cranked out images of Black and Asian Nazi soldiers was seen as a warning about the power artificial intelligence can give tech titans.
“We definitely messed up on the image generation,” Google co-founder Sergey Brin said at a recent AI “hackathon,” adding that the company should have tested Gemini more thoroughly. Mistakes made in the name of cultural sensitivity are flashpoints, particularly given the tense political divisions in the United States, a situation exacerbated by Elon Musk’s X platform, formerly Twitter.
In the coming decade, the amount of information—or misinformation—created by AI could dwarf that generated by people, meaning those controlling AI safeguards will have a huge influence on the world, Weaver said.

Karen Palmer, an award-winning mixed-reality creator with Interactive Films Ltd.
With Gemini, Google engineers tried to rebalance the algorithms to provide results better reflecting human diversity.

“It can really be tricky, nuanced and subtle to figure out where bias is and how it’s included,” said technology lawyer Alex Shahrestani, a managing partner at Promise Legal, a law firm for tech companies.