Researchers say a lot more work is needed, but their findings suggest the technology could one day support doctors working in emergency medicine.

The study was conducted by Dr. Hidde ten Berg, from the department of emergency medicine, and Dr. Steef Kurstjens, from the department of clinical chemistry and hematology, both at Jeroen Bosch Hospital, 's-Hertogenbosch, The Netherlands.
Dr. ten Berg told the Congress, "Like a lot of people, we have been trying out ChatGPT and we were intrigued to see how well it worked for examining some complex diagnostic cases. So, we set up a study to assess how well the chatbot worked compared to doctors with a collection of emergency medicine cases from daily practice." The cases were drawn from 2022.
The researchers entered physicians' notes on patients' signs, symptoms and physical examinations into two versions of ChatGPT (3.5 and 4.0). They also provided the chatbot with the results of lab tests, such as blood and urine analysis. For each case, they compared the shortlist of likely diagnoses generated by the chatbot to the shortlist made by emergency medicine doctors and to the patient's correct diagnosis.
They found a large overlap between the shortlists generated by ChatGPT and the doctors. Doctors had the correct diagnosis within their top five likely diagnoses in 87% of the cases, compared to 97% for ChatGPT version 3.5 and 87% for version 4.0. Dr. ten Berg said, "We found that ChatGPT performed well in generating a list of likely diagnoses and suggesting the most likely option. We also found a lot of overlap with the doctors' lists of likely diagnoses. Simply put, this indicates that ChatGPT was able to suggest medical diagnoses much like a human doctor would."