Conference Paper: DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers

Title: DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers
Authors: Chen, Xianing; Cao, Qiong; Zhong, Yujie; Zhang, Jing; Gao, Shenghua; Tao, Dacheng
Keywords: Deep learning architectures and techniques; Optimization methods
Issue Date: 2022
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2022, v. 2022-June, p. 12042-12052
Abstract: Transformers are successfully applied to computer vision due to their powerful modeling capacity with self-attention. However, the excellent performance of transformers heavily depends on enormous training images. Thus, a data-efficient transformer solution is urgently needed. In this work, we propose an early knowledge distillation framework, which is termed as DearKD, to improve the data efficiency required by transformers. Our DearKD is a two-stage framework that first distills the inductive biases from the early intermediate layers of a CNN and then gives the transformer full play by training without distillation. Further, our DearKD can be readily applied to the extreme data-free case where no real images are available. In this case, we propose a boundary-preserving intra-divergence loss based on DeepInversion to further close the performance gap against the full-data counterpart. Extensive experiments on ImageNet, partial ImageNet, data-free setting and other downstream tasks prove the superiority of DearKD over its baselines and state-of-the-art methods.
Persistent Identifier: http://hdl.handle.net/10722/345266
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331
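The abstract above outlines a two-stage training recipe: first distill inductive biases from the early intermediate layers of a CNN teacher, then drop distillation and let the transformer train on labels alone. Below is a minimal, hypothetical sketch of such a schedule in PyTorch. The teacher/student interfaces, the feature-projection module `proj`, the unit loss weights and the temperature are illustrative assumptions, not the DearKD modules or losses defined in the paper (which also covers a DeepInversion-based data-free variant not sketched here).

```python
# Hypothetical sketch of a two-stage "distill early, then train freely" schedule.
# Assumptions (not from the paper): teacher(images) returns (pooled early-layer
# features, logits); student(images) returns (pooled features, logits); `proj`
# maps student features to the teacher's feature dimension; all loss weights = 1.
import torch
import torch.nn.functional as F


def stage1_distill_loss(student, teacher, proj, images, labels, temperature=4.0):
    """Stage 1: supervise with labels, teacher logits and early intermediate features."""
    with torch.no_grad():
        t_feat, t_logits = teacher(images)
    s_feat, s_logits = student(images)

    ce = F.cross_entropy(s_logits, labels)
    kd = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    feat = F.mse_loss(proj(s_feat), t_feat)  # align with the CNN's early-layer features
    return ce + kd + feat


def stage2_plain_loss(student, images, labels):
    """Stage 2: drop all distillation terms; the transformer trains on labels only."""
    _, s_logits = student(images)
    return F.cross_entropy(s_logits, labels)


def train(student, teacher, proj, loader, optimizer, epochs_stage1, epochs_total):
    for epoch in range(epochs_total):
        for images, labels in loader:
            if epoch < epochs_stage1:
                loss = stage1_distill_loss(student, teacher, proj, images, labels)
            else:
                loss = stage2_plain_loss(student, images, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

The stage split `epochs_stage1` and the equal weighting of the three terms are placeholders; the paper's actual alignment modules, loss definitions and schedule should be taken from the publication itself.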

 

DC Field | Value
dc.contributor.author | Chen, Xianing
dc.contributor.author | Cao, Qiong
dc.contributor.author | Zhong, Yujie
dc.contributor.author | Zhang, Jing
dc.contributor.author | Gao, Shenghua
dc.contributor.author | Tao, Dacheng
dc.date.accessioned | 2024-08-15T09:26:16Z
dc.date.available | 2024-08-15T09:26:16Z
dc.date.issued | 2022
dc.identifier.citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2022, v. 2022-June, p. 12042-12052
dc.identifier.issn | 1063-6919
dc.identifier.uri | http://hdl.handle.net/10722/345266
dc.description.abstract | Transformers are successfully applied to computer vision due to their powerful modeling capacity with self-attention. However, the excellent performance of transformers heavily depends on enormous training images. Thus, a data-efficient transformer solution is urgently needed. In this work, we propose an early knowledge distillation framework, which is termed as DearKD, to improve the data efficiency required by transformers. Our DearKD is a two-stage framework that first distills the inductive biases from the early intermediate layers of a CNN and then gives the transformer full play by training without distillation. Further, our DearKD can be readily applied to the extreme data-free case where no real images are available. In this case, we propose a boundary-preserving intra-divergence loss based on DeepInversion to further close the performance gap against the full-data counterpart. Extensive experiments on ImageNet, partial ImageNet, data-free setting and other downstream tasks prove the superiority of DearKD over its baselines and state-of-the-art methods.
dc.language | eng
dc.relation.ispartof | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.subject | Deep learning architectures and techniques
dc.subject | Optimization methods
dc.title | DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers
dc.type | Conference_Paper
dc.description.nature | link_to_subscribed_fulltext
dc.identifier.doi | 10.1109/CVPR52688.2022.01174
dc.identifier.scopus | eid_2-s2.0-85134872958
dc.identifier.volume | 2022-June
dc.identifier.spage | 12042
dc.identifier.epage | 12052
