Article: Performance of Generative Artificial Intelligence in Dental Licensing Examinations

Title: Performance of Generative Artificial Intelligence in Dental Licensing Examinations
Authors: Chau, Reinhard Chun Wang; Thu, Khaing Myat; Yu, Ollie Yiru; Hsung, Richard Tai-Chiu; Lo, Edward Chin Man; Lam, Walter Yu Hang
Keywords: Artificial intelligence; Communication; Dental education; Digital technology; Examination questions; Specialties, Dental
Issue Date: 19-Jan-2024
Publisher: Elsevier
Citation: International Dental Journal, 2024
Abstract

Objectives

Generative artificial intelligence (GenAI), including large language models (LLMs), has vast potential applications in health care and education. However, it is unclear how proficient LLMs are in interpreting written input and providing accurate answers in dentistry. This study aims to investigate the accuracy of GenAI in answering questions from dental licensing examinations.

Methods

A total of 1461 multiple-choice questions from question books for the US and the UK dental licensing examinations were input into two versions of ChatGPT: 3.5 and 4.0. The pass marks of the US and UK dental examinations were 75.0% and 50.0%, respectively. The performance of the two versions of GenAI in individual examinations and dental subjects was analysed and compared.
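
The abstract does not describe the querying pipeline in code. The following is a minimal sketch of how such multiple-choice questions might be put to the two models and scored, assuming the current OpenAI Python client; the model names, prompt wording, and the sample item are illustrative assumptions, not the authors' materials.

    # Minimal sketch of an MCQ benchmark run, assuming the OpenAI Python
    # client (openai >= 1.0). Model names, prompt wording, and the sample
    # question are illustrative assumptions, not the authors' materials.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_mcq(model: str, stem: str, options: dict[str, str]) -> str:
        """Pose one multiple-choice question; return the letter chosen."""
        prompt = stem + "\n" + "\n".join(f"{k}. {v}" for k, v in options.items())
        prompt += "\nAnswer with the single letter of the best option."
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return reply.choices[0].message.content.strip()[0].upper()

    # Hypothetical item; the real 1461 questions came from exam question books.
    item = {
        "stem": "Which tooth is most commonly impacted?",
        "options": {"A": "Maxillary canine", "B": "Mandibular third molar",
                    "C": "Maxillary first premolar", "D": "Mandibular incisor"},
        "key": "B",
    }

    for model in ("gpt-3.5-turbo", "gpt-4"):
        choice = ask_mcq(model, item["stem"], item["options"])
        print(model, choice, "correct" if choice == item["key"] else "incorrect")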

Results

ChatGPT 3.5 correctly answered 68.3% (n = 509) and 43.3% (n = 296) of questions from the US and UK dental licensing examinations, respectively. The scores for ChatGPT 4.0 were 80.7% (n = 601) and 62.7% (n = 429), respectively. ChatGPT 4.0 passed both written dental licensing examinations, whilst ChatGPT 3.5 failed both. Compared with ChatGPT 3.5, ChatGPT 4.0 answered 327 questions correctly that the older version had answered incorrectly, and 102 incorrectly that the older version had answered correctly.
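
These per-question counts are mutually consistent: the net question-level gain for ChatGPT 4.0 equals its overall lead in correct answers. A quick arithmetic check, using only the figures reported above:

    # Correct-answer counts reported in the abstract (US + UK questions).
    correct_35 = 509 + 296   # ChatGPT 3.5: 805 correct overall
    correct_40 = 601 + 429   # ChatGPT 4.0: 1030 correct overall

    gained = 327  # answered correctly by 4.0 but incorrectly by 3.5
    lost = 102    # answered correctly by 3.5 but incorrectly by 4.0

    # The net question-level gain must equal the gap between the totals.
    assert correct_40 - correct_35 == gained - lost
    print(correct_40 - correct_35)  # 225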

Conclusions

The newer version of GenAI has shown good proficiency in answering multiple-choice questions from dental licensing examinations. Whilst the more recent version of GenAI generally performed better, this observation may not hold true in all scenarios, and further improvements are necessary. The use of GenAI in dentistry will have significant implications for dentist–patient communication and the training of dental professionals.


Persistent Identifier: http://hdl.handle.net/10722/339700
ISSN: 0020-6539
2023 Impact Factor: 3.2
2023 SCImago Journal Rankings: 0.803

DC Field: Value

dc.contributor.author: Chau, Reinhard Chun Wang
dc.contributor.author: Thu, Khaing Myat
dc.contributor.author: Yu, Ollie Yiru
dc.contributor.author: Hsung, Richard Tai-Chiu
dc.contributor.author: Lo, Edward Chin Man
dc.contributor.author: Lam, Walter Yu Hang
dc.date.accessioned: 2024-03-11T10:38:41Z
dc.date.available: 2024-03-11T10:38:41Z
dc.date.issued: 2024-01-19
dc.identifier.citation: International Dental Journal, 2024
dc.identifier.issn: 0020-6539
dc.identifier.uri: http://hdl.handle.net/10722/339700
dc.description.abstract: Objectives: Generative artificial intelligence (GenAI), including large language models (LLMs), has vast potential applications in health care and education. However, it is unclear how proficient LLMs are in interpreting written input and providing accurate answers in dentistry. This study aims to investigate the accuracy of GenAI in answering questions from dental licensing examinations. Methods: A total of 1461 multiple-choice questions from question books for the US and the UK dental licensing examinations were input into two versions of ChatGPT: 3.5 and 4.0. The pass marks of the US and UK dental examinations were 75.0% and 50.0%, respectively. The performance of the two versions of GenAI in individual examinations and dental subjects was analysed and compared. Results: ChatGPT 3.5 correctly answered 68.3% (n = 509) and 43.3% (n = 296) of questions from the US and UK dental licensing examinations, respectively. The scores for ChatGPT 4.0 were 80.7% (n = 601) and 62.7% (n = 429), respectively. ChatGPT 4.0 passed both written dental licensing examinations, whilst ChatGPT 3.5 failed both. Compared with ChatGPT 3.5, ChatGPT 4.0 answered 327 questions correctly that the older version had answered incorrectly, and 102 incorrectly that the older version had answered correctly. Conclusions: The newer version of GenAI has shown good proficiency in answering multiple-choice questions from dental licensing examinations. Whilst the more recent version of GenAI generally performed better, this observation may not hold true in all scenarios, and further improvements are necessary. The use of GenAI in dentistry will have significant implications for dentist–patient communication and the training of dental professionals.
dc.language: eng
dc.publisher: Elsevier
dc.relation.ispartof: International Dental Journal
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Artificial intelligence
dc.subject: Communication
dc.subject: Dental education
dc.subject: Digital technology
dc.subject: Examination questions
dc.subject: Specialties, Dental
dc.title: Performance of Generative Artificial Intelligence in Dental Licensing Examinations
dc.type: Article
dc.identifier.doi: 10.1016/j.identj.2023.12.007
dc.identifier.scopus: eid_2-s2.0-85183133411
dc.identifier.eissn: 1875-595X
dc.identifier.issnl: 0020-6539

Export via OAI-PMH Interface in XML Formats
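
For programmatic reuse, a record like this one can be harvested with a standard OAI-PMH GetRecord request returning the Dublin Core fields listed above. A minimal sketch in Python follows; the endpoint URL and OAI identifier are assumptions inferred from the handle, so verify them against the repository's documentation before use.

    import requests
    import xml.etree.ElementTree as ET

    # Both values below are assumptions inferred from the handle above;
    # check the repository's documentation for the real endpoint and id.
    ENDPOINT = "https://hub.hku.hk/oai/request"
    IDENTIFIER = "oai:hub.hku.hk:10722/339700"

    resp = requests.get(ENDPOINT, params={
        "verb": "GetRecord",
        "metadataPrefix": "oai_dc",  # plain Dublin Core, as listed above
        "identifier": IDENTIFIER,
    }, timeout=30)
    resp.raise_for_status()

    # Print every Dublin Core element in the returned record.
    DC = "{http://purl.org/dc/elements/1.1/}"
    for elem in ET.fromstring(resp.text).iter():
        if elem.tag.startswith(DC):
            print(elem.tag[len(DC):], "=", (elem.text or "").strip())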

