Conference Paper: Incorporating copying mechanism in sequence-to-sequence learning

Title: Incorporating copying mechanism in sequence-to-sequence learning
Authors: Gu, J; Lu, Z; Li, H; Li, VOK
Issue Date: 2016
Publisher: Association for Computational Linguistics
Citation: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), Berlin, Germany, 7-12 August 2016, v. 1, p. 1631–1640
Abstract: We address an important problem in sequence-to-sequence (Seq2Seq) learning referred to as copying, in which certain segments in the input sequence are selectively replicated in the output sequence. A similar phenomenon is observable in human language communication. For example, humans tend to repeat entity names or even long phrases in conversation. The challenge with regard to copying in Seq2Seq is that new machinery is needed to decide when to perform the operation. In this paper, we incorporate copying into neural network-based Seq2Seq learning and propose a new model called COPYNET with an encoder-decoder structure. COPYNET can nicely integrate the regular way of word generation in the decoder with the new copying mechanism, which can choose subsequences in the input sequence and put them at proper places in the output sequence. Our empirical study on both synthetic data sets and real-world data sets demonstrates the efficacy of COPYNET. For example, COPYNET can outperform regular RNN-based models with remarkable margins on text summarization tasks.
Description: Session 6A: Machine learning
Persistent Identifier: http://hdl.handle.net/10722/247770
ISBN: 978-1-945626-00-5
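The abstract's central idea, letting the decoder place probability mass either on vocabulary words (generate mode) or on tokens appearing in the source sequence (copy mode), can be sketched in a few lines of NumPy. This is an illustrative sketch under a shared-softmax assumption, not the paper's implementation: the function name and the score inputs are hypothetical, and in COPYNET these scores would come from an encoder-decoder RNN.

```python
import numpy as np

def copy_mixture(vocab_scores, copy_scores, src_ids, vocab_size):
    """Combine generate-mode and copy-mode scores into one output distribution.

    vocab_scores: (V,) unnormalized scores for generating each vocabulary word
    copy_scores:  (L,) unnormalized scores for copying each source position
    src_ids:      (L,) vocabulary id of the token at each source position
    Returns a (V,) probability distribution over the output vocabulary.
    """
    # Normalize both score sets in a single shared softmax, so the two
    # modes compete directly for probability mass.
    all_scores = np.concatenate([vocab_scores, copy_scores])
    all_scores = all_scores - all_scores.max()  # numerical stability
    probs = np.exp(all_scores) / np.exp(all_scores).sum()
    gen_probs, copy_probs = probs[:vocab_size], probs[vocab_size:]

    # Scatter-add each source position's copy probability onto that token's
    # vocabulary id; a word can thus receive mass from both modes.
    out = gen_probs.copy()
    np.add.at(out, src_ids, copy_probs)
    return out
```

For instance, with a high copy score on a source position holding token id 3, nearly all probability mass lands on word 3 even if its generate-mode score is low, which is the behavior the abstract describes for repeated entity names.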

 

DC Field: Value
dc.contributor.author: Gu, J
dc.contributor.author: Lu, Z
dc.contributor.author: Li, H
dc.contributor.author: Li, VOK
dc.date.accessioned: 2017-10-18T08:32:23Z
dc.date.available: 2017-10-18T08:32:23Z
dc.date.issued: 2016
dc.identifier.citation: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), Berlin, Germany, 7-12 August 2016, v. 1, p. 1631–1640
dc.identifier.isbn: 978-1-945626-00-5
dc.identifier.uri: http://hdl.handle.net/10722/247770
dc.description: Session 6A: Machine learning
dc.description.abstract: We address an important problem in sequence-to-sequence (Seq2Seq) learning referred to as copying, in which certain segments in the input sequence are selectively replicated in the output sequence. A similar phenomenon is observable in human language communication. For example, humans tend to repeat entity names or even long phrases in conversation. The challenge with regard to copying in Seq2Seq is that new machinery is needed to decide when to perform the operation. In this paper, we incorporate copying into neural network-based Seq2Seq learning and propose a new model called COPYNET with an encoder-decoder structure. COPYNET can nicely integrate the regular way of word generation in the decoder with the new copying mechanism, which can choose subsequences in the input sequence and put them at proper places in the output sequence. Our empirical study on both synthetic data sets and real-world data sets demonstrates the efficacy of COPYNET. For example, COPYNET can outperform regular RNN-based models with remarkable margins on text summarization tasks.
dc.language: eng
dc.publisher: Association for Computational Linguistics
dc.relation.ispartof: Annual Meeting of the Association for Computational Linguistics (ACL), 2016
dc.title: Incorporating copying mechanism in sequence-to-sequence learning
dc.type: Conference_Paper
dc.identifier.email: Gu, J: jiataogu@eee.hku.hk
dc.identifier.email: Li, VOK: vli@eee.hku.hk
dc.identifier.authority: Li, VOK=rp00150
dc.description.nature: link_to_OA_fulltext
dc.identifier.doi: 10.18653/v1/P16-1154
dc.identifier.scopus: eid_2-s2.0-85011899840
dc.identifier.hkuros: 279685
dc.identifier.volume: 1
dc.identifier.spage: 1631
dc.identifier.epage: 1640
dc.publisher.place: United States
