Conference Paper: Gradually Excavating External Knowledge for Implicit Complex Question Answering

Title: Gradually Excavating External Knowledge for Implicit Complex Question Answering
Authors: Liu, Chang; Li, Xiaoguang; Shang, Lifeng; Jiang, Xin; Liu, Qun; Lam, Edmund; Wong, Ngai
Issue Date: 1-Dec-2023
Publisher: Association for Computational Linguistics
Abstract

Recently, large language models (LLMs) have gained much attention for their emergent human-comparable capabilities and huge potential. However, for open-domain implicit question-answering problems, LLMs may not be the ultimate solution due to: 1) uncovered or out-of-date domain knowledge, and 2) one-shot generation and hence restricted comprehensiveness. To this end, this work proposes a gradual knowledge excavation framework for open-domain complex question answering, in which LLMs iteratively and actively acquire extrinsic information, then reason based on the acquired historical knowledge. Specifically, at each step of the solving process, the model selects an action to execute, such as querying external knowledge or performing a single logical reasoning step, to gradually progress toward a final answer. Our method can effectively leverage plug-and-play external knowledge and dynamically adjust the strategy for solving complex questions. Evaluated on the StrategyQA dataset, our method achieves 78.17% accuracy with less than 6% of the parameters of its competitors, setting a new SOTA in the ~10B LLM class.
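
The iterative action-selection loop described in the abstract can be sketched in a few lines of Python. This is a minimal illustration under assumptions, not the authors' implementation: llm_generate() and retrieve() are hypothetical placeholders for an LLM completion call and a plug-and-play external knowledge source, and the QUERY/REASON/ANSWER action vocabulary is invented for the sketch.

    # Hypothetical sketch of the gradual knowledge-excavation loop: at each
    # step the model chooses an action conditioned on the accumulated history,
    # either pulling in external knowledge or taking one reasoning step,
    # until it commits to a final answer.

    def llm_generate(prompt: str) -> str:
        """Placeholder for any LLM completion call (assumption)."""
        raise NotImplementedError

    def retrieve(query: str) -> str:
        """Placeholder for a plug-and-play knowledge source (assumption)."""
        raise NotImplementedError

    def answer_question(question: str, max_steps: int = 10) -> str:
        history = [f"Question: {question}"]
        for _ in range(max_steps):
            context = "\n".join(history)
            # Ask the model which action to take next, given everything so far.
            decision = llm_generate(
                f"{context}\nNext action (QUERY <terms> | REASON | ANSWER):"
            )
            if decision.startswith("QUERY"):
                terms = decision[len("QUERY"):].strip()
                history.append(f"Knowledge: {retrieve(terms)}")   # excavate
            elif decision.startswith("REASON"):
                step = llm_generate(f"{context}\nOne reasoning step:")
                history.append(f"Thought: {step}")                # reason
            else:
                break                                             # ANSWER
        return llm_generate("\n".join(history) + "\nFinal answer:")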


Persistent Identifier: http://hdl.handle.net/10722/339494

DC Field: Value
dc.contributor.author: Liu, Chang
dc.contributor.author: Li, Xiaoguang
dc.contributor.author: Shang, Lifeng
dc.contributor.author: Jiang, Xin
dc.contributor.author: Liu, Qun
dc.contributor.author: Lam, Edmund
dc.contributor.author: Wong, Ngai
dc.date.accessioned: 2024-03-11T10:37:05Z
dc.date.available: 2024-03-11T10:37:05Z
dc.date.issued: 2023-12-01
dc.identifier.uri: http://hdl.handle.net/10722/339494
dc.description.abstract: Recently, large language models (LLMs) have gained much attention for their emergent human-comparable capabilities and huge potential. However, for open-domain implicit question-answering problems, LLMs may not be the ultimate solution due to: 1) uncovered or out-of-date domain knowledge, and 2) one-shot generation and hence restricted comprehensiveness. To this end, this work proposes a gradual knowledge excavation framework for open-domain complex question answering, in which LLMs iteratively and actively acquire extrinsic information, then reason based on the acquired historical knowledge. Specifically, at each step of the solving process, the model selects an action to execute, such as querying external knowledge or performing a single logical reasoning step, to gradually progress toward a final answer. Our method can effectively leverage plug-and-play external knowledge and dynamically adjust the strategy for solving complex questions. Evaluated on the StrategyQA dataset, our method achieves 78.17% accuracy with less than 6% of the parameters of its competitors, setting a new SOTA in the ~10B LLM class.
dc.language: eng
dc.publisher: Association for Computational Linguistics
dc.relation.ispartof: EMNLP (01/12/2023-08/12/2023, Singapore)
dc.title: Gradually Excavating External Knowledge for Implicit Complex Question Answering
dc.type: Conference_Paper
dc.identifier.doi: 10.18653/v1/2023.findings-emnlp.961
