Article: Diffusion models in text generation: a survey

Title: Diffusion models in text generation: a survey
Authors: Yi, Qiuhua; Chen, Xiangfan; Zhang, Chenwei; Zhou, Zehai; Zhu, Linan; Kong, Xiangjie
Keywords: Diffusion models; Natural language generation; Text generation
Issue Date: 23-Feb-2024
Publisher: PeerJ
Citation: PeerJ Computer Science, 2024, v. 10
Abstract: Diffusion models are a class of mathematically grounded generative models that were first applied to image generation. Recently, they have drawn wide interest in natural language generation (NLG), a sub-field of natural language processing (NLP), due to their capability to generate varied and high-quality text outputs. In this article, we conduct a comprehensive survey of the application of diffusion models to text generation. We divide text generation into three parts (conditional, unconstrained, and multi-mode text generation) and provide a detailed introduction to each. In addition, considering that autoregressive pre-trained language models (PLMs) have recently come to dominate text generation, we present a detailed comparison between diffusion models and PLMs along multiple dimensions, highlighting their respective advantages and limitations. We believe that integrating PLMs into diffusion models is a valuable research avenue. We also discuss the current challenges diffusion models face in text generation and propose potential future research directions, such as improving sampling speed to address scalability issues and exploring multi-modal text generation. By providing a comprehensive analysis and outlook, this survey will serve as a valuable reference for researchers and practitioners interested in applying diffusion models to text generation tasks.
Persistent Identifier: http://hdl.handle.net/10722/348501
ISI Accession Number ID: WOS:001174146700002
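
To make the abstract's subject concrete, the sketch below illustrates the embedding-space (continuous) diffusion formulation commonly used for text: token embeddings are corrupted by a Gaussian forward process and a learned network reverses the noising step by step. This is a minimal illustrative sketch assuming the standard DDPM equations, not code from the surveyed article; the names (toy_denoiser, the toy dimensions, the noise schedule) are hypothetical choices for illustration.

    # Minimal, illustrative sketch of embedding-space text diffusion
    # (assumed DDPM-style formulation; not code from the surveyed article).
    import numpy as np

    rng = np.random.default_rng(0)

    T = 1000                              # number of diffusion steps
    betas = np.linspace(1e-4, 0.02, T)    # linear noise schedule (assumed)
    alphas_bar = np.cumprod(1.0 - betas)  # cumulative product, \bar{alpha}_t

    def q_sample(x0, t):
        """Forward process: noise clean token embeddings x0 to step t."""
        noise = rng.standard_normal(x0.shape)
        x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise
        return x_t, noise

    def toy_denoiser(x_t, t):
        """Stand-in for a learned network that predicts the added noise."""
        return np.zeros_like(x_t)         # a real model would be trained

    def p_sample_step(x_t, t):
        """One reverse (denoising) step."""
        eps_hat = toy_denoiser(x_t, t)
        alpha_t = 1.0 - betas[t]
        mean = (x_t - betas[t] / np.sqrt(1.0 - alphas_bar[t]) * eps_hat) / np.sqrt(alpha_t)
        if t == 0:
            return mean
        return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)

    # Usage: a toy "sentence" of 8 tokens with 16-dimensional embeddings.
    x0 = rng.standard_normal((8, 16))
    x_noisy, _ = q_sample(x0, t=500)
    x_less_noisy = p_sample_step(x_noisy, t=500)

Conditional text generation would additionally feed a conditioning signal (e.g., a source sentence) into the denoiser, and sampling-speed work focuses on reducing the number of reverse steps; both are discussed in the surveyed article.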

 

DC Field: Value
dc.contributor.author: Yi, Qiuhua
dc.contributor.author: Chen, Xiangfan
dc.contributor.author: Zhang, Chenwei
dc.contributor.author: Zhou, Zehai
dc.contributor.author: Zhu, Linan
dc.contributor.author: Kong, Xiangjie
dc.date.accessioned: 2024-10-10T00:31:08Z
dc.date.available: 2024-10-10T00:31:08Z
dc.date.issued: 2024-02-23
dc.identifier.citation: PeerJ Computer Science, 2024, v. 10
dc.identifier.uri: http://hdl.handle.net/10722/348501
dc.description.abstract: Diffusion models are a class of mathematically grounded generative models that were first applied to image generation. Recently, they have drawn wide interest in natural language generation (NLG), a sub-field of natural language processing (NLP), due to their capability to generate varied and high-quality text outputs. In this article, we conduct a comprehensive survey of the application of diffusion models to text generation. We divide text generation into three parts (conditional, unconstrained, and multi-mode text generation) and provide a detailed introduction to each. In addition, considering that autoregressive pre-trained language models (PLMs) have recently come to dominate text generation, we present a detailed comparison between diffusion models and PLMs along multiple dimensions, highlighting their respective advantages and limitations. We believe that integrating PLMs into diffusion models is a valuable research avenue. We also discuss the current challenges diffusion models face in text generation and propose potential future research directions, such as improving sampling speed to address scalability issues and exploring multi-modal text generation. By providing a comprehensive analysis and outlook, this survey will serve as a valuable reference for researchers and practitioners interested in applying diffusion models to text generation tasks.
dc.language: eng
dc.publisher: PeerJ
dc.relation.ispartof: PeerJ Computer Science
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Diffusion models
dc.subject: Natural language generation
dc.subject: Text generation
dc.title: Diffusion models in text generation: a survey
dc.type: Article
dc.identifier.doi: 10.7717/peerj-cs.1905
dc.identifier.scopus: eid_2-s2.0-85186850951
dc.identifier.volume: 10
dc.identifier.eissn: 2376-5992
dc.identifier.isi: WOS:001174146700002
dc.identifier.issnl: 2376-5992
