
Article: ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom)

Title: ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom)
Authors: Cheung, Billy Ho Hung; Lau, Gary Kui Kai; Wong, Gordon Tin Chun; Lee, Elaine Yuen Phin; Kulkarni, Dhananjay; Seow, Choon Sheong; Wong, Ruby; Co, Michael Tiong-Hong
Issue Date: 29-Aug-2023
Publisher: Public Library of Science
Citation: PLoS ONE, 2023, v. 18, n. 8, p. e0290691
Abstract

Introduction: Large language models, in particular ChatGPT, have showcased remarkable language processing capabilities. Given the substantial workload of university medical staff, this study aims to assess the quality of multiple-choice questions (MCQs) produced by ChatGPT for use in graduate medical examinations, compared with questions written by university professoriate staff based on standard medical textbooks.

Methods: Fifty MCQs were generated by ChatGPT with reference to two standard undergraduate medical textbooks (Harrison's and Bailey & Love's). Another 50 MCQs were drafted by two university professoriate staff using the same textbooks. All 100 MCQs were individually numbered, randomized, and sent to five independent international assessors for quality assessment using a standardized assessment score covering five domains: appropriateness of the question, clarity and specificity, relevance, discriminative power of alternatives, and suitability for a medical graduate examination.
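The abstract does not describe how ChatGPT was prompted; the questions were presumably drafted interactively through the ChatGPT interface. Purely as an illustration of how such MCQ generation could be scripted, the sketch below uses the openai Python client (v1-style chat interface). The model name, prompt wording, and the generate_mcq helper are assumptions for illustration, not the authors' method.

    # Illustrative sketch only: not the workflow used in the study.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PROMPT = (
        "Write one single-best-answer multiple-choice question suitable for a "
        "medical graduate examination on the topic of {topic}. Provide a clinical "
        "stem, five options labelled A-E, and state the correct answer."
    )

    def generate_mcq(topic: str) -> str:
        """Return one draft MCQ for the given topic as plain text."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name, not the model used in the study
            messages=[{"role": "user", "content": PROMPT.format(topic=topic)}],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(generate_mcq("acute upper gastrointestinal bleeding"))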

Results: ChatGPT took a total of 20 minutes 25 seconds to create its 50 questions, whereas the two human examiners took a total of 211 minutes 33 seconds to draft theirs. When the mean scores of the A.I.-constructed questions were compared with those of the human-drafted questions, the A.I. was inferior to humans only in the relevance domain (A.I.: 7.56 +/- 0.94 vs. human: 7.88 +/- 0.52; p = 0.04). There was no significant difference in question quality between A.I.- and human-drafted questions in the total assessment score or in the other domains. Questions generated by the A.I. showed a wider range of scores, while those created by humans were more consistent, within a narrower range.
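The abstract reports per-domain means with standard deviations and a p-value but does not name the statistical test used. As a hedged illustration of the kind of comparison reported for the relevance domain, the sketch below runs a Welch two-sample t-test on synthetic scores drawn to roughly match the reported spread; the data are stand-ins, not study data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Synthetic relevance-domain scores for 50 A.I. and 50 human questions,
    # drawn to roughly match the reported means and spreads (not study data).
    ai_scores = rng.normal(loc=7.56, scale=0.94, size=50)
    human_scores = rng.normal(loc=7.88, scale=0.52, size=50)

    # Welch's t-test (unequal variances); the abstract does not state which
    # test the authors actually used.
    t_stat, p_value = stats.ttest_ind(ai_scores, human_scores, equal_var=False)

    print(f"A.I.:  {ai_scores.mean():.2f} +/- {ai_scores.std(ddof=1):.2f}")
    print(f"Human: {human_scores.mean():.2f} +/- {human_scores.std(ddof=1):.2f}")
    print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")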

Conclusion: ChatGPT has the potential to generate comparable-quality MCQs for medical graduate examinations within a significantly shorter time.


Persistent Identifier: http://hdl.handle.net/10722/331619
ISSN: 1932-6203
2021 Impact Factor: 3.752
2020 SCImago Journal Rankings: 0.990

DC Field: Value
dc.contributor.author: Cheung, Billy Ho Hung
dc.contributor.author: Lau, Gary Kui Kai
dc.contributor.author: Wong, Gordon Tin Chun
dc.contributor.author: Lee, Elaine Yuen Phin
dc.contributor.author: Kulkarni, Dhananjay
dc.contributor.author: Seow, Choon Sheong
dc.contributor.author: Wong, Ruby
dc.contributor.author: Co, Michael Tiong-Hong
dc.date.accessioned: 2023-09-21T06:57:25Z
dc.date.available: 2023-09-21T06:57:25Z
dc.date.issued: 2023-08-29
dc.identifier.citation: PLoS ONE, 2023, v. 18, n. 8, p. e0290691
dc.identifier.issn: 1932-6203
dc.identifier.uri: http://hdl.handle.net/10722/331619
dc.language: eng
dc.publisher: Public Library of Science
dc.relation.ispartof: PLoS ONE
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.title: ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom)
dc.type: Article
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.1371/journal.pone.0290691
dc.identifier.scopus: eid_2-s2.0-85168979658
dc.identifier.volume: 18
dc.identifier.issue: 8
dc.identifier.spage: e0290691
dc.identifier.eissn: 1932-6203
dc.identifier.issnl: 1932-6203
