Alphabet CEO Sundar Pichai speaks at a Google I/O event in Mountain View, Calif., May 14, 2024.

Bloopers — some funny, others disturbing — have been shared on social media since Google unleashed a makeover of its search page that frequently puts AI-generated summaries on top of search results. Hilariously wrong answers from Google’s new AI are showing just how unreliable the technology can be.
These are mostly silly examples that in some cases came from people trying to coax Google’s AI into saying the wrong thing. I’ll explain why Google’s AI might tell you to eat glue and the lessons you should take from such mistakes.

The technology behind ChatGPT and the “AI Overviews” in Google searches, which rolled out to all Americans last week, is called a large language model.
Sometimes Google’s AI isolates correct and useful information. Sometimes, particularly if there’s not much online information related to your search, it spits out something wrong. OpenAI said its accuracy rate has improved. Microsoft said its Copilot chatbot includes links in its replies, as Google does, so people can explore more. Microsoft also said it takes feedback from people and is improving Copilot.

Willison suggested that twist on the Reagan-era “trust but verify” refrain for chatbots. Let’s put it on bumper stickers.