Google’s weird AI answers hint at a fundamental problem

  • 📰 washingtonpost


There’s no known fix for large language models making things up, experts say.

It’s not uncommon for an exciting new tech feature to debut with some bugs. But at least some of the problems with Google’s new generative-AI-powered search answers may not be fixable anytime soon, five AI experts told Tech Brief on Tuesday.

Google initially downplayed the problems, saying the vast majority of its AI Overviews are “high quality” and noting that some of the examples going around social media were probably fake. But the company also acknowledged that it was removing at least some of the problematic results manually, a laborious process for a site that fields billions of queries per day.

With AI Overviews, Google is trying to address language models’ well-known penchant for fabrication by having them cite and summarize specific sources. But that approach can fail in at least two ways, said a professor at the Santa Fe Institute who researches complex systems. One is that the system can’t always tell whether a given source provides a reliable answer to the question, perhaps because it fails to understand the context. Another is that even when it finds a good source, it may misinterpret what that source is saying.

“I don’t know if the summaries are ready for prime time,” he said, “which by the way is good news for web publishers,” because it means users will still have reason to visit trusted sites rather than relying on Google for everything.

 

