Conference Paper: Non-autoregressive neural machine translation

Title: Non-autoregressive neural machine translation
Authors: Gu, J; Bradbury, J; Xiong, C; Li, VOK; Socher, R
Keywords: Machine translation; Non-autoregressive; Transformer; Fertility; NMT
Issue Date: 2018
Citation: 6th International Conference on Learning Representations (ICLR), Vancouver, Canada, 30 April - 3 May 2018
Abstract: Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English–German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English–Romanian.
Persistent Identifier: http://hdl.handle.net/10722/261953
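
The abstract describes decoding via input token fertilities: each source token is assigned a fertility, copied that many times to form the decoder input, and all target tokens are then emitted in one parallel pass. The following is a minimal illustrative sketch of that decoding flow, not the authors' implementation; toy_fertility_model and toy_translation_model are hypothetical stand-ins for the trained fertility predictor and non-autoregressive decoder.

# Minimal sketch (not the authors' code) of fertility-based parallel decoding as
# described in the abstract: predict a fertility for each source token, copy each
# source token that many times to form the decoder input, then emit every target
# token in a single parallel pass. The two "toy_" functions are hypothetical
# stand-ins for the trained networks.

import numpy as np

rng = np.random.default_rng(0)

def toy_fertility_model(src_tokens):
    """Stand-in for the fertility predictor: one non-negative integer per source token."""
    # A real model outputs a distribution over fertilities per source position;
    # here we just sample small counts so the example is self-contained.
    return rng.integers(low=0, high=3, size=len(src_tokens))

def toy_translation_model(decoder_inputs):
    """Stand-in for the non-autoregressive decoder: all outputs produced at once."""
    # A real decoder attends over encoder states; here each copied source token
    # is mapped to an uppercase "translation" purely for illustration.
    return [tok.upper() for tok in decoder_inputs]

def non_autoregressive_decode(src_tokens):
    fertilities = toy_fertility_model(src_tokens)
    # Copy each source token according to its fertility; the total fertility fixes
    # the target length, so no left-to-right generation loop is needed.
    decoder_inputs = [tok for tok, f in zip(src_tokens, fertilities) for _ in range(int(f))]
    return toy_translation_model(decoder_inputs)  # one parallel pass over all positions

print(non_autoregressive_decode(["ein", "kleines", "beispiel"]))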

 

DC Field: Value
dc.contributor.author: Gu, J
dc.contributor.author: Bradbury, J
dc.contributor.author: Xiong, C
dc.contributor.author: Li, VOK
dc.contributor.author: Socher, R
dc.date.accessioned: 2018-09-28T04:50:53Z
dc.date.available: 2018-09-28T04:50:53Z
dc.date.issued: 2018
dc.identifier.citation: 6th International Conference on Learning Representations (ICLR), Vancouver, Canada, 30 April - 3 May 2018
dc.identifier.uri: http://hdl.handle.net/10722/261953
dc.description.abstract: Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English–German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English–Romanian.
dc.language: eng
dc.relation.ispartof: International Conference on Learning Representations (ICLR)
dc.subject: Machine translation
dc.subject: Non-autoregressive
dc.subject: Transformer
dc.subject: Fertility
dc.subject: NMT
dc.title: Non-autoregressive neural machine translation
dc.type: Conference_Paper
dc.identifier.email: Li, VOK: vli@eee.hku.hk
dc.identifier.authority: Li, VOK=rp00150
dc.identifier.hkuros: 292171
