
Conference Paper: An Exploratory Study on Long Dialogue Summarization: What Works and What's Next

Title: An Exploratory Study on Long Dialogue Summarization: What Works and What's Next
Authors: Zhang, Y; Ni, A; Yu, T; Zhang, R; Zhu, C; Deb, B; Celikyilmaz, A; Awadallah, A; Radev, D
Issue Date: 2021
Publisher: Association for Computational Linguistics
Citation: The 2021 Conference on Empirical Methods in Natural Language Processing, Findings of the ACL: EMNLP 2021, p. 4426–4433
Abstract: Dialogue summarization helps readers capture salient information from long conversations in meetings, interviews, and TV series. However, real-world dialogues pose a great challenge to current summarization models, as the dialogue length typically exceeds the input limits imposed by recent transformer-based pre-trained models, and the interactive nature of dialogues makes relevant information more context-dependent and sparsely distributed than news articles. In this work, we perform a comprehensive study on long dialogue summarization by investigating three strategies to deal with the lengthy input problem and locate relevant information: (1) extended transformer models such as Longformer, (2) retrieve-then-summarize pipeline models with several dialogue utterance retrieval methods, and (3) hierarchical dialogue encoding models such as HMNet. Our experimental results on three long dialogue datasets (QMSum, MediaSum, SummScreen) show that the retrieve-then-summarize pipeline models yield the best performance. We also demonstrate that the summary quality can be further improved with a stronger retrieval model and pretraining on proper external summarization datasets.
Persistent Identifier: http://hdl.handle.net/10722/319364
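The retrieve-then-summarize strategy described in the abstract, which the paper finds strongest on its three benchmarks, can be sketched in miniature. The sketch below scores each dialogue utterance against a query with a simple bag-of-words cosine similarity and keeps the top-k utterances in their original order; this is an illustrative stand-in, assuming a toy dialogue and query, for the stronger retrievers and pre-trained summarizers (which the paper evaluates) that a real pipeline would use.

```python
# Minimal sketch of a retrieve-then-summarize pipeline for long dialogues.
# The scoring here is plain bag-of-words cosine similarity; the paper's
# pipelines use stronger retrieval methods and feed the retrieved
# utterances to a pre-trained summarizer, which is not reproduced here.
import math
import re
from collections import Counter


def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words vector, punctuation stripped."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


def retrieve_utterances(dialogue, query, k=2):
    """Keep the k utterances most similar to the query, in original order."""
    q_vec = tokenize(query)
    scored = [(cosine(tokenize(u), q_vec), i, u) for i, u in enumerate(dialogue)]
    top = sorted(scored, reverse=True)[:k]
    return [u for _, i, u in sorted(top, key=lambda t: t[1])]


# Hypothetical example dialogue and query, for illustration only.
dialogue = [
    "Alice: Let's review the budget for next quarter.",
    "Bob: Sure, but first, did anyone watch the game last night?",
    "Alice: The budget needs a 10 percent cut in travel costs.",
    "Carol: Agreed, travel budget cuts are the main action item.",
]
query = "What was decided about the budget?"
relevant = retrieve_utterances(dialogue, query, k=2)
# In a full pipeline, the retrieved utterances would be concatenated and
# passed to a pre-trained summarization model at this point.
print(relevant)
```

Retrieval shrinks the input to fit a standard transformer's length limit, which is the lengthy-input problem the abstract names; off-topic turns (like Bob's aside) are filtered out before summarization.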


DC Field | Value | Language
dc.contributor.author | Zhang, Y | -
dc.contributor.author | Ni, A | -
dc.contributor.author | Yu, T | -
dc.contributor.author | Zhang, R | -
dc.contributor.author | Zhu, C | -
dc.contributor.author | Deb, B | -
dc.contributor.author | Celikyilmaz, A | -
dc.contributor.author | Awadallah, A | -
dc.contributor.author | Radev, D | -
dc.date.accessioned | 2022-10-14T05:11:58Z | -
dc.date.available | 2022-10-14T05:11:58Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | The 2021 Conference on Empirical Methods in Natural Language Processing, Findings of the ACL: EMNLP 2021, p. 4426–4433 | -
dc.identifier.uri | http://hdl.handle.net/10722/319364 | -
dc.description.abstract | Dialogue summarization helps readers capture salient information from long conversations in meetings, interviews, and TV series. However, real-world dialogues pose a great challenge to current summarization models, as the dialogue length typically exceeds the input limits imposed by recent transformer-based pre-trained models, and the interactive nature of dialogues makes relevant information more context-dependent and sparsely distributed than news articles. In this work, we perform a comprehensive study on long dialogue summarization by investigating three strategies to deal with the lengthy input problem and locate relevant information: (1) extended transformer models such as Longformer, (2) retrieve-then-summarize pipeline models with several dialogue utterance retrieval methods, and (3) hierarchical dialogue encoding models such as HMNet. Our experimental results on three long dialogue datasets (QMSum, MediaSum, SummScreen) show that the retrieve-then-summarize pipeline models yield the best performance. We also demonstrate that the summary quality can be further improved with a stronger retrieval model and pretraining on proper external summarization datasets. | -
dc.language | eng | -
dc.publisher | Association for Computational Linguistics | -
dc.relation.ispartof | The 2021 Conference on Empirical Methods in Natural Language Processing | -
dc.title | An Exploratory Study on Long Dialogue Summarization: What Works and What's Next | -
dc.type | Conference_Paper | -
dc.identifier.email | Yu, T: taoyds@hku.hk | -
dc.identifier.authority | Yu, T=rp02864 | -
dc.identifier.hkuros | 339279 | -
dc.identifier.volume | Findings of the ACL: EMNLP 2021 | -
dc.identifier.spage | 4426 | -
dc.identifier.epage | 4433 | -
