One study published in January, which has not been peer reviewed, posed ethical teasers to ChatGPT and concluded that the chatbot makes for an inconsistent moral adviser that can influence human decision-making even when people know that the advice is coming from AI software.
Being a doctor is about much more than regurgitating encyclopedic medical knowledge. While many physicians are enthusiastic about using ChatGPT for low-risk tasks like text summarization, some bioethicists worry that doctors will turn to the bot for advice when they encounter a tough ethical decision like whether surgery is the right choice for a patient with a low likelihood of survival or recovery.
“You can't outsource or automate that kind of process to a generative AI model,” says Jamie Webb, a bioethicist at the Center for Technomoral Futures at the University of Edinburgh. Last year, Webb and a team of moral psychologists explored what it would take to build an AI-powered “moral adviser” for use in medicine, inspired by earlier research that suggested the idea. Webb and his coauthors concluded that it would be tricky for such systems to reliably balance different ethical principles, and that doctors and other staff might suffer “moral de-skilling” if they were to grow overly reliant on a bot instead of thinking through tricky decisions themselves.
Webb points out that doctors have been told before that AI that processes language will revolutionize their work, only to be disappointed. After Jeopardy! wins in 2010 and 2011, the Watson division at IBM turned to oncology and made claims about the effectiveness of its AI in fighting cancer.