Links for full text (may require subscription)
- Publisher website (DOI): 10.2196/55595
- Scopus: eid_2-s2.0-85193952351
- Web of Science: WOS:001241297100001
Article: Exploring the Performance of ChatGPT-4 in the Taiwan Audiologist Qualification Examination: Preliminary Observational Study Highlighting the Potential of AI Chatbots in Hearing Care
| Title | Exploring the Performance of ChatGPT-4 in the Taiwan Audiologist Qualification Examination: Preliminary Observational Study Highlighting the Potential of AI Chatbots in Hearing Care |
|---|---|
| Authors | Wang, Shangqiguo; Mo, Changgeng; Chen, Yuan; Dai, Xiaolu; Wang, Huiyi; Shen, Xiaoli |
| Keywords | AI; artificial intelligence; audiologist; audiology; chatbot; ChatGPT; educational technology; examination; health care services; healthcare services; hearing; hearing care; hearing specialist; information accuracy; large language model; medical education; natural language processing; Taiwan |
| Issue Date | 26-Apr-2024 |
| Publisher | JMIR Publications |
| Citation | JMIR Medical Education, 2024, v. 10 |
| Abstract | Background: Artificial intelligence (AI) chatbots, such as ChatGPT-4, have shown immense potential for application across various aspects of medicine, including medical education, clinical practice, and research. Objective: This study aimed to evaluate the performance of ChatGPT-4 in the 2023 Taiwan Audiologist Qualification Examination, thereby preliminarily exploring the potential utility of AI chatbots in the fields of audiology and hearing care services. Methods: ChatGPT-4 was tasked to provide answers and reasoning for the 2023 Taiwan Audiologist Qualification Examination. The examination encompassed six subjects: (1) basic auditory science, (2) behavioral audiology, (3) electrophysiological audiology, (4) principles and practice of hearing devices, (5) health and rehabilitation of the auditory and balance systems, and (6) auditory and speech communication disorders (including professional ethics). Each subject included 50 multiple-choice questions, with the exception of behavioral audiology, which had 49 questions, amounting to a total of 299 questions. Results: The correct answer rates across the 6 subjects were as follows: 88% for basic auditory science, 63% for behavioral audiology, 58% for electrophysiological audiology, 72% for principles and practice of hearing devices, 80% for health and rehabilitation of the auditory and balance systems, and 86% for auditory and speech communication disorders (including professional ethics). The overall accuracy rate for the 299 questions was 75%, which surpasses the examination’s passing criteria of an average 60% accuracy rate across all subjects. A comprehensive review of ChatGPT-4’s responses indicated that incorrect answers were predominantly due to information errors. Conclusions: ChatGPT-4 demonstrated a robust performance in the Taiwan Audiologist Qualification Examination, showcasing effective logical reasoning skills. Our results suggest that with enhanced information accuracy, ChatGPT-4’s performance could be further improved. This study indicates significant potential for the application of AI chatbots in audiology and hearing care services. |
| Persistent Identifier | http://hdl.handle.net/10722/357228 |
| ISSN | 2369-3762 (2023 Impact Factor: 3.2; 2023 SCImago Journal Rankings: 0.806) |
| ISI Accession Number ID | WOS:001241297100001 |
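The abstract's overall accuracy figure can be sanity-checked against the per-subject rates it reports. The sketch below reconstructs the per-subject correct counts by rounding rate × question count; these counts are an assumption for illustration, not data taken from the paper itself.

```python
# Sketch: check that the reported overall accuracy (75%) is consistent with
# the per-subject rates in the abstract. Correct counts are reconstructed by
# rounding rate * question count (an assumption, since the paper reports
# only rounded percentages).
subjects = {
    "basic auditory science": (0.88, 50),
    "behavioral audiology": (0.63, 49),
    "electrophysiological audiology": (0.58, 50),
    "principles and practice of hearing devices": (0.72, 50),
    "health and rehabilitation of the auditory and balance systems": (0.80, 50),
    "auditory and speech communication disorders": (0.86, 50),
}

total_questions = sum(n for _, n in subjects.values())           # 299, as reported
correct = sum(round(rate * n) for rate, n in subjects.values())  # 223 under this reconstruction

overall = correct / total_questions
print(f"{correct}/{total_questions} = {overall:.1%}")  # ~74.6%, rounding to the reported 75%
```

Under this reconstruction the overall rate is about 74.6%, which rounds to the 75% stated in the abstract and clears the 60% passing threshold.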
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Wang, Shangqiguo | - |
| dc.contributor.author | Mo, Changgeng | - |
| dc.contributor.author | Chen, Yuan | - |
| dc.contributor.author | Dai, Xiaolu | - |
| dc.contributor.author | Wang, Huiyi | - |
| dc.contributor.author | Shen, Xiaoli | - |
| dc.date.accessioned | 2025-06-23T08:54:07Z | - |
| dc.date.available | 2025-06-23T08:54:07Z | - |
| dc.date.issued | 2024-04-26 | - |
| dc.identifier.citation | JMIR Medical Education, 2024, v. 10 | - |
| dc.identifier.issn | 2369-3762 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/357228 | - |
| dc.description.abstract | Background: Artificial intelligence (AI) chatbots, such as ChatGPT-4, have shown immense potential for application across various aspects of medicine, including medical education, clinical practice, and research. Objective: This study aimed to evaluate the performance of ChatGPT-4 in the 2023 Taiwan Audiologist Qualification Examination, thereby preliminarily exploring the potential utility of AI chatbots in the fields of audiology and hearing care services. Methods: ChatGPT-4 was tasked to provide answers and reasoning for the 2023 Taiwan Audiologist Qualification Examination. The examination encompassed six subjects: (1) basic auditory science, (2) behavioral audiology, (3) electrophysiological audiology, (4) principles and practice of hearing devices, (5) health and rehabilitation of the auditory and balance systems, and (6) auditory and speech communication disorders (including professional ethics). Each subject included 50 multiple-choice questions, with the exception of behavioral audiology, which had 49 questions, amounting to a total of 299 questions. Results: The correct answer rates across the 6 subjects were as follows: 88% for basic auditory science, 63% for behavioral audiology, 58% for electrophysiological audiology, 72% for principles and practice of hearing devices, 80% for health and rehabilitation of the auditory and balance systems, and 86% for auditory and speech communication disorders (including professional ethics). The overall accuracy rate for the 299 questions was 75%, which surpasses the examination’s passing criteria of an average 60% accuracy rate across all subjects. A comprehensive review of ChatGPT-4’s responses indicated that incorrect answers were predominantly due to information errors. Conclusions: ChatGPT-4 demonstrated a robust performance in the Taiwan Audiologist Qualification Examination, showcasing effective logical reasoning skills. Our results suggest that with enhanced information accuracy, ChatGPT-4’s performance could be further improved. This study indicates significant potential for the application of AI chatbots in audiology and hearing care services. | - |
| dc.language | eng | - |
| dc.publisher | JMIR Publications | - |
| dc.relation.ispartof | JMIR Medical Education | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | AI | - |
| dc.subject | artificial intelligence | - |
| dc.subject | audiologist | - |
| dc.subject | audiology | - |
| dc.subject | chatbot | - |
| dc.subject | ChatGPT | - |
| dc.subject | educational technology | - |
| dc.subject | examination | - |
| dc.subject | health care services | - |
| dc.subject | healthcare services | - |
| dc.subject | hearing | - |
| dc.subject | hearing care | - |
| dc.subject | hearing specialist | - |
| dc.subject | information accuracy | - |
| dc.subject | large language model | - |
| dc.subject | medical education | - |
| dc.subject | natural language processing | - |
| dc.subject | Taiwan | - |
| dc.title | Exploring the Performance of ChatGPT-4 in the Taiwan Audiologist Qualification Examination: Preliminary Observational Study Highlighting the Potential of AI Chatbots in Hearing Care | - |
| dc.type | Article | - |
| dc.description.nature | published_or_final_version | - |
| dc.identifier.doi | 10.2196/55595 | - |
| dc.identifier.scopus | eid_2-s2.0-85193952351 | - |
| dc.identifier.volume | 10 | - |
| dc.identifier.eissn | 2369-3762 | - |
| dc.identifier.isi | WOS:001241297100001 | - |
| dc.identifier.issnl | 2369-3762 | - |
