Conference Paper: DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models

Title: DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models
Authors: Gong, Shansan; Li, Mukai; Feng, Jiangtao; Wu, Zhiyong; Kong, Lingpeng
Issue Date: 1-May-2023
Abstract

Recently, diffusion models have emerged as a new paradigm for generative modeling. Despite their success in domains with continuous signals such as vision and audio, adapting diffusion models to natural language remains under-explored owing to the discrete nature of text, especially for conditional generation. We tackle this challenge by proposing DiffuSeq, a diffusion model designed for sequence-to-sequence (Seq2Seq) text generation tasks. In extensive evaluation over a wide range of Seq2Seq tasks, we find that DiffuSeq achieves comparable or even better performance than six established baselines, including a state-of-the-art model based on pre-trained language models. Beyond quality, an intriguing property of DiffuSeq is its high diversity during generation, which is desirable in many Seq2Seq tasks. We further include a theoretical analysis revealing the connection between DiffuSeq and autoregressive/non-autoregressive models. Bringing together theoretical analysis and empirical evidence, we demonstrate the great potential of diffusion models in complex conditional language generation tasks. Code is available at https://github.com/Shark-NLP/DiffuSeq.
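As background to the abstract above: diffusion models operate on continuous signals, which is why applying them to discrete text (as DiffuSeq does, via continuous token embeddings) is non-trivial. The sketch below illustrates only the generic closed-form forward (noising) step of Gaussian diffusion, x_t = sqrt(ᾱ_t)·x_0 + sqrt(1−ᾱ_t)·ε; the step count, noise schedule, and embedding dimension are illustrative assumptions, not the paper's configuration.

```python
import math
import random

T = 1000  # number of diffusion steps (illustrative)
# Linear noise schedule beta_t from 1e-4 to 0.02 (a common choice).
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

# alpha_bar_t = product of (1 - beta_s) for s <= t.
alphas_bar = []
prod = 1.0
for b in betas:
    prod *= (1.0 - b)
    alphas_bar.append(prod)

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    a = alphas_bar[t]
    return [math.sqrt(a) * v + math.sqrt(1.0 - a) * rng.gauss(0.0, 1.0)
            for v in x0]

rng = random.Random(0)
x0 = [rng.gauss(0.0, 1.0) for _ in range(16)]  # one token embedding (dim 16)
x_noisy = q_sample(x0, T - 1, rng)             # near-pure noise at the last step
```

The reverse process, which a learned network inverts step by step, is what models like DiffuSeq train; conditioning on a source sequence is the part this sketch does not cover.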


Persistent Identifier: http://hdl.handle.net/10722/333819


dc.contributor.author: Gong, Shansan
dc.contributor.author: Li, Mukai
dc.contributor.author: Feng, Jiangtao
dc.contributor.author: Wu, Zhiyong
dc.contributor.author: Kong, Lingpeng
dc.date.accessioned: 2023-10-06T08:39:20Z
dc.date.available: 2023-10-06T08:39:20Z
dc.date.issued: 2023-05-01
dc.identifier.uri: http://hdl.handle.net/10722/333819
dc.description.abstract: Recently, diffusion models have emerged as a new paradigm for generative modeling. Despite their success in domains with continuous signals such as vision and audio, adapting diffusion models to natural language remains under-explored owing to the discrete nature of text, especially for conditional generation. We tackle this challenge by proposing DiffuSeq, a diffusion model designed for sequence-to-sequence (Seq2Seq) text generation tasks. In extensive evaluation over a wide range of Seq2Seq tasks, we find that DiffuSeq achieves comparable or even better performance than six established baselines, including a state-of-the-art model based on pre-trained language models. Beyond quality, an intriguing property of DiffuSeq is its high diversity during generation, which is desirable in many Seq2Seq tasks. We further include a theoretical analysis revealing the connection between DiffuSeq and autoregressive/non-autoregressive models. Bringing together theoretical analysis and empirical evidence, we demonstrate the great potential of diffusion models in complex conditional language generation tasks. Code is available at https://github.com/Shark-NLP/DiffuSeq.
dc.language: eng
dc.relation.ispartof: International Conference on Learning Representations (ICLR 2023), 01/05/2023-05/05/2023, Kigali, Rwanda
dc.title: DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models
dc.type: Conference_Paper
dc.identifier.doi: 10.48550/arXiv.2210.08933
