Article: Pretraining the noisy channel model for task-oriented dialogue

Title: Pretraining the noisy channel model for task-oriented dialogue
Authors: Liu, Qi; Yu, Lei; Rimell, Laura; Blunsom, Phil
Issue Date: 2021
Citation: Transactions of the Association for Computational Linguistics, 2021, v. 9, p. 657-674
Abstract: Direct decoding for task-oriented dialogue is known to suffer from the explaining-away effect, manifested in models that prefer short and generic responses. Here we argue for the use of Bayes’ theorem to factorize the dialogue task into two models, the distribution of the context given the response, and the prior for the response itself. This approach, an instantiation of the noisy channel model, both mitigates the explaining-away effect and allows the principled incorporation of large pretrained models for the response prior. We present extensive experiments showing that a noisy channel model decodes better responses compared to direct decoding and that a two-stage pre-training strategy, employing both open-domain and task-oriented dialogue data, improves over randomly initialized models.
Persistent Identifier: http://hdl.handle.net/10722/321967
ISI Accession Number ID: WOS:000751952200040
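
For readers of the abstract above, the Bayes’-theorem factorization it describes can be sketched as follows; the notation (context x, response y) is a generic illustration, not taken from the paper itself:

\[
\hat{y} \;=\; \arg\max_{y}\; p(y \mid x)
       \;=\; \arg\max_{y}\; \frac{p(x \mid y)\, p(y)}{p(x)}
       \;=\; \arg\max_{y}\; \underbrace{p(x \mid y)}_{\text{channel model}} \;\underbrace{p(y)}_{\text{response prior}}
\]

Under this factorization, the response prior p(y) is the term into which a large pretrained language model can be incorporated, while the channel model p(x | y) scores how well a candidate response accounts for the observed dialogue context, which is what counteracts short, generic responses.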

 

DC Field: Value
dc.contributor.author: Liu, Qi
dc.contributor.author: Yu, Lei
dc.contributor.author: Rimell, Laura
dc.contributor.author: Blunsom, Phil
dc.date.accessioned: 2022-11-03T02:22:41Z
dc.date.available: 2022-11-03T02:22:41Z
dc.date.issued: 2021
dc.identifier.citation: Transactions of the Association for Computational Linguistics, 2021, v. 9, p. 657-674
dc.identifier.uri: http://hdl.handle.net/10722/321967
dc.description.abstract: Direct decoding for task-oriented dialogue is known to suffer from the explaining-away effect, manifested in models that prefer short and generic responses. Here we argue for the use of Bayes’ theorem to factorize the dialogue task into two models, the distribution of the context given the response, and the prior for the response itself. This approach, an instantiation of the noisy channel model, both mitigates the explaining-away effect and allows the principled incorporation of large pretrained models for the response prior. We present extensive experiments showing that a noisy channel model decodes better responses compared to direct decoding and that a two-stage pre-training strategy, employing both open-domain and task-oriented dialogue data, improves over randomly initialized models.
dc.language: eng
dc.relation.ispartof: Transactions of the Association for Computational Linguistics
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.title: Pretraining the noisy channel model for task-oriented dialogue
dc.type: Article
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.1162/tacl_a_00390
dc.identifier.scopus: eid_2-s2.0-85117644184
dc.identifier.volume: 9
dc.identifier.spage: 657
dc.identifier.epage: 674
dc.identifier.eissn: 2307-387X
dc.identifier.isi: WOS:000751952200040
