
Article: KnowLab_AIMed at MEDIQA-CORR 2024: Chain-of-Though (CoT) Prompting Strategies for Medical Error Detection and Correction

Title: KnowLab_AIMed at MEDIQA-CORR 2024: Chain-of-Though (CoT) Prompting Strategies for Medical Error Detection and Correction
Authors: Wu, Zhaolong; Hasan, Abul; Wu, Jinge; Kim, Yunsoo; Cheung, Jason PY; Zhang, Teng; Wu, Honghan
Issue Date: 23-Apr-2024
Citation: Clinical Natural Language Processing, 2024
Abstract

This paper describes our submission to the MEDIQA-CORR 2024 shared task for automatically detecting and correcting medical errors in clinical notes. We report results for three methods of few-shot In-Context Learning (ICL) augmented with Chain-of-Thought (CoT) and reason prompts using a large language model (LLM). In the first method, we manually analyse a subset of the training and validation datasets to infer three CoT prompts by examining error types in the clinical notes. In the second method, we utilise the training dataset to prompt the LLM to deduce reasons for the correctness or incorrectness of clinical notes. The constructed CoTs and reasons are then augmented with ICL examples to solve the tasks of error detection, span identification, and error correction. Finally, we combine the two methods using a rule-based ensemble method. Across the three sub-tasks, our ensemble method ranks 3rd on both sub-tasks 1 and 2, and 7th on sub-task 3, among all submissions.
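The few-shot ICL-with-CoT setup described in the abstract can be sketched as prompt assembly: each in-context example pairs a clinical note with a reasoning chain and an answer, and the target note is appended for the LLM to complete. This is a minimal illustrative sketch only; the example note, rationale, prompt wording, and function name are hypothetical and do not reproduce the paper's actual prompts.

```python
# Minimal sketch of few-shot In-Context Learning (ICL) with
# Chain-of-Thought (CoT) rationales for medical error correction.
# All names, example text, and prompt wording here are hypothetical
# illustrations, not the authors' actual prompts or data.

def build_cot_prompt(icl_examples, target_note):
    """Assemble a few-shot prompt in which each in-context example
    pairs a clinical note with a CoT rationale and a correction."""
    parts = [
        "Detect any medical error in the clinical note, identify the "
        "erroneous span, and propose a correction. Reason step by step."
    ]
    for ex in icl_examples:
        parts.append(f"Note: {ex['note']}")
        parts.append(f"Reasoning: {ex['rationale']}")
        parts.append(f"Answer: {ex['answer']}")
    parts.append(f"Note: {target_note}")
    parts.append("Reasoning:")  # the LLM continues from here
    return "\n\n".join(parts)

# Hypothetical in-context example (not from the shared-task data):
examples = [{
    "note": "Patient with type 2 diabetes was started on levothyroxine "
            "for glycaemic control.",
    "rationale": "Levothyroxine treats hypothyroidism, not hyperglycaemia; "
                 "a glucose-lowering agent such as metformin is expected.",
    "answer": "Error span: 'levothyroxine'; Correction: 'metformin'",
}]

prompt = build_cot_prompt(examples, "A 55-year-old man presented with ...")
```

The prompt ends at "Reasoning:", so the model must produce a reasoning chain before committing to an answer, which is the essence of CoT-augmented ICL.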


Persistent Identifier: http://hdl.handle.net/10722/344233

 

DC Field: Value

dc.contributor.author: Wu, Zhaolong
dc.contributor.author: Hasan, Abul
dc.contributor.author: Wu, Jinge
dc.contributor.author: Kim, Yunsoo
dc.contributor.author: Cheung, Jason PY
dc.contributor.author: Zhang, Teng
dc.contributor.author: Wu, Honghan
dc.date.accessioned: 2024-07-16T03:41:51Z
dc.date.available: 2024-07-16T03:41:51Z
dc.date.issued: 2024-04-23
dc.identifier.citation: Clinical Natural Language Processing, 2024
dc.identifier.uri: http://hdl.handle.net/10722/344233
dc.description.abstract: This paper describes our submission to the MEDIQA-CORR 2024 shared task for automatically detecting and correcting medical errors in clinical notes. We report results for three methods of few-shot In-Context Learning (ICL) augmented with Chain-of-Thought (CoT) and reason prompts using a large language model (LLM). In the first method, we manually analyse a subset of the training and validation datasets to infer three CoT prompts by examining error types in the clinical notes. In the second method, we utilise the training dataset to prompt the LLM to deduce reasons for the correctness or incorrectness of clinical notes. The constructed CoTs and reasons are then augmented with ICL examples to solve the tasks of error detection, span identification, and error correction. Finally, we combine the two methods using a rule-based ensemble method. Across the three sub-tasks, our ensemble method ranks 3rd on both sub-tasks 1 and 2, and 7th on sub-task 3, among all submissions.
dc.language: eng
dc.relation.ispartof: Clinical Natural Language Processing
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.title: KnowLab_AIMed at MEDIQA-CORR 2024: Chain-of-Though (CoT) Prompting Strategies for Medical Error Detection and Correction
dc.type: Article
dc.description.nature: published_or_final_version
