Comparative Performance of ChatGPT 3.5 and GPT4 on Rhinology Standardized Board Examination Questions.

Abstract
Advances in deep learning and artificial intelligence (AI) have led to the emergence of large language models (LLMs) such as OpenAI's ChatGPT. This study aimed to evaluate the performance of ChatGPT 3.5 and GPT4 on Otolaryngology (Rhinology) Standardized Board Examination questions in comparison to otolaryngology residents.

All 127 rhinology standardized questions were selected from www.boardvitals.com, a study tool commonly used by otolaryngology residents preparing for board exams. Ninety-three text-based questions were administered to ChatGPT 3.5 and GPT4, and their answers were compared with the average results of the question bank (used primarily by otolaryngology residents). Thirty-four image-based questions were provided to GPT4 and underwent the same analysis. Based on the findings of an earlier study, a pass-fail cutoff was set at the 10th percentile.

On text-based questions, ChatGPT 3.5 answered correctly 45.2% of the time (8th percentile, P = .0001), while GPT4 achieved 86.0% (66th percentile, P = .001). GPT4 answered image-based questions correctly 64.7% of the time. Projections suggest that ChatGPT 3.5 would not pass the American Board of Otolaryngology Written Question Exam (ABOto WQE), whereas GPT4 stands a strong chance of passing.

The older LLM, ChatGPT 3.5, is unlikely to pass the ABOto WQE, whereas the more advanced GPT4 model exhibits a much higher likelihood of success. This rapid progression in AI indicates its potential future role in otolaryngology education. As AI technology continues to advance, AI-assisted medical education, diagnosis, and treatment planning may become commonplace in the medical and surgical landscape.

Level of evidence: 5.
Reference Key
patel2024comparativeoto
Authors Patel, Evan A; Fleischer, Lindsay; Filip, Peter; Eggerstedt, Michael; Hutz, Michael; Michaelides, Elias; Batra, Pete S; Tajudeen, Bobby A
Journal OTO Open
Year 2024
DOI 10.1002/oto2.164

