Artificial Intelligence in Medical Education: Exploring ChatGPT’s Potential as a Learning Tool in Ophthalmology
DOI: https://doi.org/10.48560/rspo.33232
Keywords: Artificial Intelligence, Graduate Medical Education, Ophthalmology
Abstract
INTRODUCTION: The recent introduction of large language models (LLMs) based on artificial intelligence (AI), the most popular of which is ChatGPT, has sparked interest in their application in Ophthalmology. The aim of this study is to investigate the contribution of ChatGPT as a learning tool during Ophthalmology residency.
MATERIAL AND METHODS: ChatGPT 3.5 (OpenAI, United States) was used to simulate an Ophthalmology exam consisting of 260 multiple-choice questions distributed among the 13 knowledge areas of the American Academy of Ophthalmology’s Basic and Clinical Science Course 2022-2023. Specialists from Centro Hospitalar de Lisboa Ocidental rated the justifications provided by ChatGPT on a five-point scale: 1 (very poor), 2 (poor), 3 (satisfactory), 4 (good) and 5 (very good). As innovative elements, we complemented the model’s accuracy (the number of correctly answered questions in the direct experiment, E1, and/or the prompted experiment, E2) with its precision (inter-response consistency over 3 repetitions) and examined the quality of its justifications.
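The evaluation protocol described above can be summarized programmatically. The sketch below is illustrative only: the study queried ChatGPT through the chat.openai.com interface, so the OpenAI Python client, the model name "gpt-3.5-turbo", and all function and variable names are assumptions introduced for this example, not the authors’ code.

```python
# Minimal sketch of the evaluation described above, assuming the OpenAI Python client
# (the study itself queried ChatGPT through the chat.openai.com interface).
# Model name, helpers and data structures are illustrative assumptions.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask_once(stem: str, options: dict[str, str]) -> str:
    """Send one multiple-choice question and return the letter chosen by the model."""
    prompt = (
        stem
        + "\n"
        + "\n".join(f"{letter}) {text}" for letter, text in options.items())
        + "\nAnswer with a single letter."
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content.strip()[0].upper()


def evaluate(questions: list[dict], repetitions: int = 3) -> dict:
    """Accuracy from the first run; precision as consistency over the repeated runs."""
    first_run_correct = 0
    consistency = Counter()  # how many questions were answered correctly 3x, 2x, 1x, 0x
    for q in questions:
        marks = [ask_once(q["stem"], q["options"]) == q["correct"] for _ in range(repetitions)]
        first_run_correct += marks[0]
        consistency[sum(marks)] += 1
    n = len(questions)
    return {
        "accuracy": first_run_correct / n,
        "consistency": {k: v / n for k, v in consistency.items()},
    }
```

With a 260-question set, the "consistency" entry would correspond to the repetition figures reported in the Results (the share of questions answered correctly three times, twice, or once).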
RESULTS: ChatGPT’s accuracy was 63.1%, rising to 64.2% in the prompted test. In both tests, the best performance was recorded in fundamentals of ophthalmology (75.0% and 76.7%) and the worst in oculoplastics and orbit (46.7% and 55.0%), optics and rehabilitation (55.0% and 53.3%), and uveitis and inflammation (55.0% and 53.3%). Neither the question format nor the domain assessed was significantly associated with the model’s performance. ChatGPT gave the correct answer all three times in 69.3% of cases, twice in 17.2% and only once in 13.5%. In 94.7% of cases, the justifications were rated at least satisfactory (≥3), and 47.4% achieved the maximum score.
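The abstract states that neither question format nor knowledge domain was significantly associated with performance, but does not name the statistical test used. A minimal sketch, assuming a chi-square test of independence on a domain-by-outcome contingency table; the counts below are placeholders, not the study’s data.

```python
# Hypothetical illustration of testing domain vs. performance; the counts are
# placeholders and the choice of a chi-square test of independence is an assumption.
from scipy.stats import chi2_contingency

# Rows: BCSC knowledge areas; columns: [correct, incorrect] answers (illustrative only).
table = [
    [15, 5],   # e.g. fundamentals of ophthalmology
    [11, 9],   # e.g. optics and rehabilitation
    [9, 11],   # e.g. oculoplastics and orbit
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")  # p >= 0.05: no significant association
```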
CONCLUSION: Our results are in line with those reported in the literature on LLMs and ophthalmic question banks. The model’s probabilistic nature, its lack of specific training in Ophthalmology, and its inability to guarantee up-to-date knowledge or to process images were the main limitations identified. While recognizing ChatGPT’s competence in providing correct answers across different topics in Ophthalmology, we believe it is too early to recommend it as a suitable learning tool for residents. Nevertheless, it already demonstrates the accuracy, precision and scientific quality needed to possibly become such a tool, especially if specifically trained in Ophthalmology in the future.
License
Copyright (c) 2024 Revista Sociedade Portuguesa de Oftalmologia
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.