
Conference Paper: Compression of generative pre-trained language models via quantization

Title: Compression of generative pre-trained language models via quantization
Authors: Tao, C; Hou, L; Zhang, W; Shang, L; Jiang, X; Liu, Q; Luo, P; Wong, N
Issue Date: 2022
Publisher: Abbey Group.
Citation: The 60th Annual Meeting of the Association for Computational Linguistics (ACL), Dublin, Ireland & Online, 22-27 May, 2022
Abstract: The increasing size of generative Pre-trained Language Models (PLMs) has greatly increased the demand for model compression. Despite various methods to compress BERT or its variants, there are few attempts to compress generative PLMs, and the underlying difficulty remains unclear. In this paper, we compress generative PLMs by quantization. We find that previous quantization methods fail on generative tasks due to the homogeneous word embeddings caused by reduced capacity, and varied distribution of weights. Correspondingly, we propose a token-level contrastive distillation to learn distinguishable word embeddings, and a module-wise dynamic scaling to make quantizers adaptive to different modules. Empirical results on various tasks show that our proposed method outperforms the state-of-the-art compression methods on generative PLMs by a clear margin. With comparable performance with the full-precision models, we achieve 14.4x and 13.4x compression rates on GPT-2 and BART, respectively.
Description: Outstanding Paper Award
Persistent Identifier: http://hdl.handle.net/10722/315547
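
The abstract above names two ingredients: a token-level contrastive distillation and a module-wise dynamic scaling for the quantizers. The paper's code is not part of this record; the PyTorch snippet below is only a minimal, hypothetical sketch of those two ideas. The names `ModuleWiseQuantizer` and `token_contrastive_loss`, the straight-through estimator, and the use of same-sequence tokens as negatives are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming PyTorch; illustrative only, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModuleWiseQuantizer(nn.Module):
    """Uniform symmetric weight quantizer with a learnable clipping scale.

    Each quantized module owns one instance, so the scale can adapt to that
    module's own weight distribution (the "module-wise dynamic scaling" idea).
    """

    def __init__(self, bits: int = 2):
        super().__init__()
        self.bits = bits
        self.scale = nn.Parameter(torch.tensor(1.0))  # learnable clipping range

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        qmax = 2 ** (self.bits - 1) - 1
        # Clip to [-scale, scale]; torch.minimum/maximum keep the gradient
        # path to the learnable scale.
        w_clipped = torch.maximum(torch.minimum(w, self.scale), -self.scale)
        step = self.scale / qmax
        w_q = torch.round(w_clipped / step) * step
        # Straight-through estimator: quantized values in the forward pass,
        # smooth gradients in the backward pass.
        return w_clipped + (w_q - w_clipped).detach()


def token_contrastive_loss(student: torch.Tensor,
                           teacher: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """Token-level contrastive distillation for one sequence.

    student, teacher: (seq_len, hidden) token representations. The teacher
    token at the same position is the positive; the other tokens act as
    negatives, pushing quantized token embeddings apart (less homogeneous).
    """
    s = F.normalize(student, dim=-1)
    t = F.normalize(teacher, dim=-1)
    logits = s @ t.t() / temperature        # pairwise cosine similarities
    targets = torch.arange(s.size(0))       # positive = matching position
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    quantizer = ModuleWiseQuantizer(bits=2)
    w = torch.randn(16, 16)
    print("distinct weight levels:", quantizer(w).unique().numel())
    loss = token_contrastive_loss(torch.randn(8, 32), torch.randn(8, 32))
    print("token-level contrastive loss:", round(loss.item(), 4))
```

The sketch uses 2-bit weights purely as an example; the compression rates quoted in the abstract (14.4x on GPT-2, 13.4x on BART) come from the paper itself and are not reproduced by this toy code.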

 

DC Field | Value | Language
dc.contributor.author | Tao, C | -
dc.contributor.author | Hou, L | -
dc.contributor.author | Zhang, W | -
dc.contributor.author | Shang, L | -
dc.contributor.author | Jiang, X | -
dc.contributor.author | Liu, Q | -
dc.contributor.author | Luo, P | -
dc.contributor.author | Wong, N | -
dc.date.accessioned | 2022-08-19T08:59:55Z | -
dc.date.available | 2022-08-19T08:59:55Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | The 60th Annual Meeting of the Association for Computational Linguistics (ACL), Dublin, Ireland & Online, 22-27 May, 2022 | -
dc.identifier.uri | http://hdl.handle.net/10722/315547 | -
dc.description | Outstanding Paper Award | -
dc.description.abstract | The increasing size of generative Pre-trained Language Models (PLMs) has greatly increased the demand for model compression. Despite various methods to compress BERT or its variants, there are few attempts to compress generative PLMs, and the underlying difficulty remains unclear. In this paper, we compress generative PLMs by quantization. We find that previous quantization methods fail on generative tasks due to the homogeneous word embeddings caused by reduced capacity, and varied distribution of weights. Correspondingly, we propose a token-level contrastive distillation to learn distinguishable word embeddings, and a module-wise dynamic scaling to make quantizers adaptive to different modules. Empirical results on various tasks show that our proposed method outperforms the state-of-the-art compression methods on generative PLMs by a clear margin. With comparable performance with the full-precision models, we achieve 14.4x and 13.4x compression rates on GPT-2 and BART, respectively. | -
dc.language | eng | -
dc.publisher | Abbey Group. | -
dc.relation.ispartof | The 60th Annual Meeting of the Association for Computational Linguistics (ACL), Outstanding Paper Award | -
dc.title | Compression of generative pre-trained language models via quantization | -
dc.type | Conference_Paper | -
dc.identifier.email | Luo, P: pluo@hku.hk | -
dc.identifier.email | Wong, N: nwong@eee.hku.hk | -
dc.identifier.authority | Luo, P=rp02575 | -
dc.identifier.authority | Wong, N=rp00190 | -
dc.identifier.hkuros | 335577 | -
dc.publisher.place | Ireland | -
