It’s not uncommon for an exciting new tech feature to debut with some bugs. But at least some of the problems with Google’s new generative-AI-powered search answers may not be fixable anytime soon, five AI experts told Tech Brief on Tuesday.
Google initially downplayed the problems, saying the vast majority of its AI Overviews are “high quality” and noting that some of the examples going around social media were probably fake. But the company also acknowledged that it was removing at least some of the problematic results manually, a laborious process for a site that fields billions of queries per day.
With AI Overviews, Google is trying to address language models’ well-known penchant for fabrication by having them cite and summarize specific sources. But that approach can fail in at least two ways, according to a professor at the Santa Fe Institute who researches complex systems. One is that the system can’t always tell whether a given source provides a reliable answer to the question, perhaps because it fails to understand the context. Another is that even when it finds a good source, it may misinterpret what that source is saying.
“I don’t know if the summaries are ready for prime time,” he said, “which by the way is good news for web publishers,” because it means users will still have reason to visit trusted sites rather than relying on Google for everything.