Article: Model Compression Using Progressive Channel Pruning

Title: Model Compression Using Progressive Channel Pruning
Authors: Guo, Jinyang; Zhang, Weichen; Ouyang, Wanli; Xu, Dong
Keywords: channel pruning; domain adaptation; model compression; transfer learning
Issue Date: 2021
Citation: IEEE Transactions on Circuits and Systems for Video Technology, 2021, v. 31, n. 3, p. 1114-1124
Abstract: In this work, we propose a simple but effective channel pruning framework called Progressive Channel Pruning (PCP) to accelerate Convolutional Neural Networks (CNNs). In contrast to existing channel pruning methods that prune channels only once per layer in a layer-by-layer fashion, our progressive framework iteratively prunes a small number of channels from several selected layers, following a three-step attempting-selecting-pruning pipeline in each iteration. In the attempting step, we attempt to prune a pre-defined number of channels from one layer using any existing channel pruning method and estimate the accuracy drop for this layer based on the labelled samples in the validation set. In the selecting step, based on the estimated accuracy drops for all layers, we propose a greedy strategy to automatically select a set of layers whose pruning leads to a smaller overall accuracy drop. In the pruning step, we prune a small number of channels from these selected layers. We further extend our PCP framework to prune channels for deep transfer learning methods such as the Domain Adversarial Neural Network (DANN), in which we effectively reduce the data distribution mismatch during channel pruning by using both labelled samples from the source domain and pseudo-labelled samples from the target domain. Our comprehensive experiments on two benchmark datasets demonstrate that our PCP framework outperforms existing channel pruning approaches under both supervised learning and transfer learning settings.
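For readers who want a concrete picture of the attempting-selecting-pruning pipeline summarised above, the following is a minimal Python sketch of one possible loop structure. The toy model representation (a dict of layer names to channel counts), the accuracy proxy, and helper names such as prune_layer and progressive_channel_pruning are illustrative assumptions, not the authors' implementation; in practice the attempting step would call an existing channel pruning method and evaluate a real CNN on labelled validation samples.

    # Illustrative sketch of the PCP attempting-selecting-pruning loop (not the authors' code).
    FULL_MODEL = {"conv1": 64, "conv2": 128, "conv3": 256, "conv4": 512}

    def toy_accuracy(model):
        # Stand-in for estimating accuracy on labelled validation samples:
        # here accuracy simply shrinks as channels are removed.
        return sum(model.values()) / sum(FULL_MODEL.values())

    def prune_layer(model, layer, n_channels):
        # Stand-in for applying any existing channel pruning method to one layer.
        pruned = dict(model)
        pruned[layer] = max(1, pruned[layer] - n_channels)
        return pruned

    def progressive_channel_pruning(model, channels_per_step=16, layers_per_iter=2,
                                    target_channel_ratio=0.5):
        total_full = sum(FULL_MODEL.values())
        while sum(model.values()) / total_full > target_channel_ratio:
            base_acc = toy_accuracy(model)

            # Attempting step: tentatively prune each layer and estimate its accuracy drop.
            drops = {layer: base_acc - toy_accuracy(prune_layer(model, layer, channels_per_step))
                     for layer in model}

            # Selecting step: greedily pick the layers with the smallest estimated drops.
            selected = sorted(drops, key=drops.get)[:layers_per_iter]

            # Pruning step: permanently remove a small number of channels from those layers.
            for layer in selected:
                model = prune_layer(model, layer, channels_per_step)
        return model

    if __name__ == "__main__":
        print(progressive_channel_pruning(dict(FULL_MODEL)))

In the paper's transfer learning extension, the accuracy estimate in the attempting step would additionally draw on pseudo-labelled samples from the target domain, as described in the abstract.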
Persistent Identifier: http://hdl.handle.net/10722/321929
ISSN: 1051-8215
2023 Impact Factor: 8.3
2023 SCImago Journal Rankings: 2.299
ISI Accession Number ID: WOS:000626532100022

 

DC Fields
dc.contributor.author: Guo, Jinyang
dc.contributor.author: Zhang, Weichen
dc.contributor.author: Ouyang, Wanli
dc.contributor.author: Xu, Dong
dc.date.accessioned: 2022-11-03T02:22:26Z
dc.date.available: 2022-11-03T02:22:26Z
dc.date.issued: 2021
dc.identifier.citation: IEEE Transactions on Circuits and Systems for Video Technology, 2021, v. 31, n. 3, p. 1114-1124
dc.identifier.issn: 1051-8215
dc.identifier.uri: http://hdl.handle.net/10722/321929
dc.description.abstract: In this work, we propose a simple but effective channel pruning framework called Progressive Channel Pruning (PCP) to accelerate Convolutional Neural Networks (CNNs). In contrast to existing channel pruning methods that prune channels only once per layer in a layer-by-layer fashion, our progressive framework iteratively prunes a small number of channels from several selected layers, following a three-step attempting-selecting-pruning pipeline in each iteration. In the attempting step, we attempt to prune a pre-defined number of channels from one layer using any existing channel pruning method and estimate the accuracy drop for this layer based on the labelled samples in the validation set. In the selecting step, based on the estimated accuracy drops for all layers, we propose a greedy strategy to automatically select a set of layers whose pruning leads to a smaller overall accuracy drop. In the pruning step, we prune a small number of channels from these selected layers. We further extend our PCP framework to prune channels for deep transfer learning methods such as the Domain Adversarial Neural Network (DANN), in which we effectively reduce the data distribution mismatch during channel pruning by using both labelled samples from the source domain and pseudo-labelled samples from the target domain. Our comprehensive experiments on two benchmark datasets demonstrate that our PCP framework outperforms existing channel pruning approaches under both supervised learning and transfer learning settings.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Circuits and Systems for Video Technology
dc.subject: channel pruning
dc.subject: domain adaptation
dc.subject: Model compression
dc.subject: transfer learning
dc.title: Model Compression Using Progressive Channel Pruning
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TCSVT.2020.2996231
dc.identifier.scopus: eid_2-s2.0-85102305578
dc.identifier.volume: 31
dc.identifier.issue: 3
dc.identifier.spage: 1114
dc.identifier.epage: 1124
dc.identifier.eissn: 1558-2205
dc.identifier.isi: WOS:000626532100022
