Conference Paper: Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering

Title: Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering
Authors: Wu, Zhiyong; Wang, Yaoxiang; Ye, Jiacheng; Kong, Lingpeng
Issue Date: 1-Jul-2023
Abstract

Despite the surprising few-shot performance of in-context learning (ICL), it is still common practice to randomly sample examples to serve as context. This paper advocates a new principle for ICL: self-adaptive in-context learning. The self-adaptation mechanism is introduced to help each sample find an in-context example organization (i.e., selection and permutation) that can derive the correct prediction, thus maximizing performance. To validate the effectiveness of self-adaptive ICL, we propose a general select-then-rank framework and instantiate it with new selection and ranking algorithms. In an extensive evaluation on eight different NLP datasets, our self-adaptive ICL method achieves a 40% relative improvement over the common-practice setting. Further analysis reveals the enormous potential of self-adaptive ICL: given more advanced algorithms, it might close the gap between ICL and finetuning. Our code will be released to facilitate future research.
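To make the select-then-rank framework concrete, below is a minimal Python sketch of one possible instantiation. It is not the authors' released implementation: the similarity-based selection, the entropy-style compression score, and all names here (select_topk, compression_score, the LabelScorer interface, template) are illustrative assumptions layered on the framework the abstract describes.

    import itertools
    import math
    from typing import Callable, List, Sequence, Tuple

    # Hypothetical interface: given a prompt and the label verbalizations,
    # return the language model's log-probability for each label.
    # Any LM backend could implement this.
    LabelScorer = Callable[[str, Sequence[str]], List[float]]

    Example = Tuple[str, str]  # (input text, label text)


    def select_topk(query: str, pool: List[Example], k: int,
                    sim: Callable[[str, str], float]) -> List[Example]:
        """Selection stage: keep the k pool examples most similar to the query."""
        return sorted(pool, key=lambda ex: sim(query, ex[0]), reverse=True)[:k]


    def compression_score(prompt: str, labels: Sequence[str],
                          scorer: LabelScorer) -> float:
        """Ranking criterion (illustrative): entropy of the model's label
        distribution, read as the expected codelength in bits needed to
        transmit the label given this context. Lower means the context
        compresses the answer better."""
        logps = scorer(prompt, labels)
        log_z = math.log(sum(math.exp(lp) for lp in logps))
        probs = [math.exp(lp - log_z) for lp in logps]
        return -sum(p * math.log2(p) for p in probs if p > 0.0)


    def select_then_rank(query: str, pool: List[Example], labels: Sequence[str],
                         k: int, sim: Callable[[str, str], float],
                         scorer: LabelScorer,
                         template: Callable[[Sequence[Example], str], str]) -> str:
        """Build the best prompt for this query: select k candidate examples,
        then rank their orderings and keep the lowest-scoring one."""
        candidates = select_topk(query, pool, k, sim)
        best_prompt, best_score = "", float("inf")
        # Exhaustive permutation search, factorial in k; a practical system
        # would score only a sampled subset of orderings.
        for order in itertools.permutations(candidates):
            prompt = template(order, query)
            score = compression_score(prompt, labels, scorer)
            if score < best_score:
                best_prompt, best_score = prompt, score
        return best_prompt

Under this reading, the per-query search is what makes the method self-adaptive: each test sample gets its own example selection and ordering, rather than one fixed, randomly sampled context shared across the whole dataset.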


Persistent Identifier: http://hdl.handle.net/10722/333769


DC Field                 Value                                                                  Language
dc.contributor.author    Wu, Zhiyong                                                            -
dc.contributor.author    Wang, Yaoxiang                                                         -
dc.contributor.author    Ye, Jiacheng                                                           -
dc.contributor.author    Kong, Lingpeng                                                         -
dc.date.accessioned      2023-10-06T08:38:55Z                                                   -
dc.date.available        2023-10-06T08:38:55Z                                                   -
dc.date.issued           2023-07-01                                                             -
dc.identifier.uri        http://hdl.handle.net/10722/333769                                     -
dc.description.abstract  (abstract text as above)                                               -
dc.language              eng                                                                    -
dc.relation.ispartof     The 61st Annual Meeting of the Association for Computational Linguistics (09/07/2023-14/07/2023, Toronto, Canada)  -
dc.title                 Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering  -
dc.type                  Conference_Paper                                                       -
dc.identifier.doi        10.18653/v1/2023.acl-long.79                                           -
dc.identifier.volume     1                                                                      -
dc.identifier.spage      1423                                                                   -
dc.identifier.epage      1436                                                                   -
