How to deal with the tsunami of AI-generated hallucinations

Source: The Globe and Mail

Problems ensue when humans and organizations use artificial-intelligence-generated content uncritically

In the 1970s, portable calculators transformed our ability to do math. Today, generative chatbots such as OpenAI’s ChatGPT and Google’s Gemini are driving similar changes in our professional and personal work. Unlike calculators, chatbots can produce incorrect or fabricated responses, known as hallucinations.

When a chatbot response is hard to verify and its accuracy is crucial, users should work in an authenticated mode, checking the output against trusted sources before relying on it. Much legal, journalism, academic and health care work should not blindly use chatbot-produced content, because producing and sharing hallucinations can have harmful consequences.
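The verification workflow described above can be sketched in code. This is a minimal illustration, not a real API: the claim list and the `trusted_facts` store are hypothetical stand-ins for however an organization extracts claims from a chatbot response and for its own vetted knowledge base.

```python
def verify_response(claims, trusted_facts):
    """Separate a response's claims into verified and unverified lists.

    Any claim not found in the trusted store is flagged for human
    review instead of being published, mirroring the 'authenticated
    mode' of chatbot use described in the article.
    """
    verified, unverified = [], []
    for claim in claims:
        if claim in trusted_facts:
            verified.append(claim)
        else:
            unverified.append(claim)
    return verified, unverified


# Hypothetical example: one accurate claim, one fabricated one.
trusted_facts = {"Ottawa is the capital of Canada"}
claims = [
    "Ottawa is the capital of Canada",
    "Canada joined the European Union in 2019",  # a hallucination
]
ok, flagged = verify_response(claims, trusted_facts)
print(flagged)  # the fabricated claim must be checked before use
```

The point of the sketch is the gate, not the lookup: in practice the trusted store would be case law, peer-reviewed sources or clinical guidelines, and the flagged claims would go to a human rather than into a published document.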

 
