By Dr. Sushama R. Chaphalkar, PhD. Apr 16, 2024. Reviewed by Lily Ramsey, LLM.

In a recent study published in the journal Eye, researchers from Canada evaluated the performance of two artificial intelligence chatbots, Google Gemini and Bard, on the ophthalmology board examination.
While ChatGPT-3.5's accuracy was up to 64% on steps one and two of the AMBOSS and NBME exams, newer versions such as ChatGPT-4 showed improved performance. The portal provides practice questions for various exams, including the Ophthalmic Knowledge Assessment Program, national board exams such as the American Board of Ophthalmology exam, and certain postgraduate exams.
Results and discussion

Bard and Gemini responded promptly and consistently to all 150 questions, without delays or errors due to high demand. In the primary analysis using the US versions, Bard took 7.1 ± 2.7 seconds to respond, while Gemini responded in 7.1 ± 2.8 seconds with a longer average response length. The Vietnam version of Gemini answered 74% of questions correctly, similar to the US version, but its answer choices differed from the US version's on 15% of questions. For both chatbots, some questions answered incorrectly by the US versions were answered correctly by the Vietnam versions, and vice versa.