AI Chatbots—With Careful Guardrails—Can Help Treat Your Depression And Anxiety

  • 📰 ForbesTech



I am a medical doctor and venture investor in healthcare, biotech, and agriculture with more than 20 years of experience. I am a strong believer that scientific breakthroughs can help us overcome some of the biggest challenges humanity faces in healthcare and agriculture: to cure and prevent chronic disease and to feed an ever-growing world population in a sustainable way – in short, science for a better life. In 2016, I joined Bayer to help start Leaps by Bayer, the impact investment unit fo

Brian Chandler, a man in his 20s who works for a bank in Atlanta, suffered from severe anxiety at the start of the pandemic in 2020. While looking for support, he came across Woebot for Adults, a chatbot developed with a team of mental health professionals that guides users through a series of pre-scripted messages designed to be engaging, witty, and empathic.

That’s where apps like Woebot, which my team at Leaps has invested in, come in. Since its debut in 2017, around 1.5 million people have interacted with Woebot, and randomized, controlled trials have demonstrated its ability to reduce symptoms of anxiety and depression across people of varying demographics. Interestingly, 75 percent of users’ messages arrive outside of business hours, when people traditionally can’t access a therapist.

Woebot Health, for instance, is now studying how it can incorporate generative AI within the confines of research approved by an institutional review board. Using generative AI to interpret users’ free-text input for a more personalized experience is a different and potentially easier application than using it to respond to users directly. The latter is where extra caution, guardrails, and human oversight are needed.

“The dangers are higher with LLMs to independently administer psychotherapy than the benefits right now,” he said, adding that the tools are evolving so fast, “you basically cannot publish a peer-reviewed paper.” He began a systematic review of AI tools for mental health a year and a half ago, before ChatGPT came out. The latest iteration, GPT-4, is even more advanced in its capacity to receive and generate outputs.

“If you try Pi today, it’s very difficult if not impossible to cause it to say something that is in any way homophobic, judgmental, racist, it doesn’t engage in any conspiracy theories, none of the prompt hacks work,” Suleyman said to audience applause. “It shows that if you’re very intentional and deliberate and start from first principles, trying to create a very boundaried and safe AI, it is possible.”

Where the future will take us with generative AI is still anyone’s guess.

 
