Appears in Collections: Conference Paper: Rethinking the pruning criteria for convolutional neural network
Title | Rethinking the pruning criteria for convolutional neural network |
---|---|
Authors | Huang, Z; Shao, W; Wang, X; Lin, L; Luo, P |
Keywords | Deep learning |
Issue Date | 2021 |
Publisher | Neural Information Processing Systems Foundation. |
Citation | 35th Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), Sydney, Australia, December 6-14, 2021 |
Abstract | Channel pruning is a popular technique for compressing convolutional neural networks (CNNs), where various pruning criteria have been proposed to remove the redundant filters. From our comprehensive experiments, we found two blind spots of pruning criteria: (1) Similarity: There are strong similarities among several primary pruning criteria that are widely cited and compared. According to these criteria, the ranks of filters' Importance Scores are almost identical, resulting in similar pruned structures. (2) Applicability: The filters' Importance Scores measured by some pruning criteria are too close together to distinguish the network redundancy well. In this paper, we analyze these blind spots on different types of pruning criteria with layer-wise pruning or global pruning. We also break some stereotypes, showing, for example, that the results of ℓ1 and ℓ2 pruning are not always similar. These analyses are based on empirical experiments and our assumption (the Convolutional Weight Distribution Assumption) that the well-trained convolutional filters in each layer approximately follow a Gaussian-like distribution. This assumption has been verified through systematic and extensive statistical tests. |
Persistent Identifier | http://hdl.handle.net/10722/315618 |
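The "similarity" blind spot in the abstract can be illustrated with a minimal sketch: score each filter by its ℓ1 and ℓ2 norm and compare the two rankings with Spearman's rank correlation. The Gaussian weights below merely stand in for the paper's Convolutional Weight Distribution Assumption (well-trained filters in a layer are approximately Gaussian-distributed); this is an illustrative toy, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# 64 toy "filters", each 3x3x3, with Gaussian weights standing in for a
# well-trained layer under the Convolutional Weight Distribution Assumption.
filters = rng.normal(size=(64, 3, 3, 3))
flat = filters.reshape(64, -1)

l1 = np.abs(flat).sum(axis=1)           # l1-norm importance scores
l2 = np.sqrt((flat ** 2).sum(axis=1))   # l2-norm importance scores

def spearman(a, b):
    """Spearman rank correlation, computed as the Pearson correlation of ranks."""
    ra = a.argsort().argsort()
    rb = b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

rho = spearman(l1, l2)
print(f"Spearman rank correlation (l1 vs l2 rankings): {rho:.3f}")
```

For Gaussian-like weights the two rankings come out highly correlated, so pruning by either criterion removes largely the same filters, which is the similarity the paper highlights.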
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Huang, Z | - |
dc.contributor.author | Shao, W | - |
dc.contributor.author | Wang, X | - |
dc.contributor.author | Lin, L | - |
dc.contributor.author | Luo, P | - |
dc.date.accessioned | 2022-08-19T09:01:15Z | - |
dc.date.available | 2022-08-19T09:01:15Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | 35th Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), Sydney, Australia, December 6-14, 2021 | - |
dc.identifier.uri | http://hdl.handle.net/10722/315618 | - |
dc.description.abstract | Channel pruning is a popular technique for compressing convolutional neural networks (CNNs), where various pruning criteria have been proposed to remove the redundant filters. From our comprehensive experiments, we found two blind spots of pruning criteria: (1) Similarity: There are strong similarities among several primary pruning criteria that are widely cited and compared. According to these criteria, the ranks of filters' Importance Scores are almost identical, resulting in similar pruned structures. (2) Applicability: The filters' Importance Scores measured by some pruning criteria are too close together to distinguish the network redundancy well. In this paper, we analyze these blind spots on different types of pruning criteria with layer-wise pruning or global pruning. We also break some stereotypes, showing, for example, that the results of ℓ1 and ℓ2 pruning are not always similar. These analyses are based on empirical experiments and our assumption (the Convolutional Weight Distribution Assumption) that the well-trained convolutional filters in each layer approximately follow a Gaussian-like distribution. This assumption has been verified through systematic and extensive statistical tests. | - |
dc.language | eng | - |
dc.publisher | Neural Information Processing Systems Foundation. | - |
dc.relation.ispartof | Advances In Neural Information Processing Systems: 35th conference on neural information processing systems (NeurIPS 2021) | - |
dc.subject | Deep learning | - |
dc.title | Rethinking the pruning criteria for convolutional neural network | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Luo, P: pluo@hku.hk | - |
dc.identifier.authority | Luo, P=rp02575 | - |
dc.identifier.hkuros | 335594 | - |
dc.publisher.place | United States | - |