Conference Paper: ConvNets vs. Transformers: Whose Visual Representations are More Transferable?

Title: ConvNets vs. Transformers: Whose Visual Representations are More Transferable?
Authors: Zhou, H; Lu, C; Yang, S; Yu, Y
Keywords: Performance evaluation; Computer vision; Visualization; Face recognition; Conferences
Issue Date: 2021
Publisher: IEEE Computer Society
Citation: ICCV Workshop on Deep Multi-Task Learning in Computer Vision (Virtual), Montreal, QC, Canada, October 11-17, 2021. In Proceedings: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2021), p. 2230-2238
Abstract: Vision transformers have attracted much attention from computer vision researchers because they are not restricted to the spatial inductive bias of ConvNets. However, although Transformer-based backbones have achieved much progress on ImageNet classification, it is still unclear whether the learned representations are as transferable as, or even more transferable than, ConvNets’ features. To address this point, we systematically investigate the transfer learning ability of ConvNets and vision transformers in 15 single-task and multi-task performance evaluations. We observe consistent advantages of Transformer-based backbones on 13 of the 15 downstream tasks, including but not limited to fine-grained classification, scene recognition (classification, segmentation and depth estimation), open-domain classification and face recognition. More specifically, we find that two ViT models rely heavily on whole-network fine-tuning to achieve performance gains, while Swin Transformer does not have such a requirement. Moreover, vision transformers behave more robustly in multi-task learning, i.e., bringing more improvements when managing mutually beneficial tasks and reducing performance losses when tackling irrelevant tasks. We hope our discoveries can facilitate the exploration and exploitation of vision transformers in the future.
Persistent Identifier: http://hdl.handle.net/10722/316357
ISI Accession Number ID: WOS:000739651102035
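
The abstract contrasts whole-network fine-tuning with transfer from a frozen pretrained backbone. As a purely illustrative sketch (not the authors' code), the snippet below shows how these two transfer settings are typically configured with the timm library; the backbone names, number of classes, and learning rate are assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn
import timm


NUM_CLASSES = 100  # hypothetical downstream task size


def build_transfer_model(backbone_name: str, full_finetune: bool) -> nn.Module:
    """Load an ImageNet-pretrained backbone with a fresh classification head.

    full_finetune=True  -> all parameters stay trainable (whole-network fine-tuning)
    full_finetune=False -> backbone is frozen; only the new head is trained
    """
    model = timm.create_model(backbone_name, pretrained=True, num_classes=NUM_CLASSES)
    if not full_finetune:
        # Freeze every parameter of the pretrained backbone.
        for param in model.parameters():
            param.requires_grad = False
        # Re-enable gradients for the freshly initialised classifier head only.
        for param in model.get_classifier().parameters():
            param.requires_grad = True
    return model


# One ConvNet and one Transformer backbone under comparable transfer protocols.
convnet = build_transfer_model("resnet50", full_finetune=True)
swin = build_transfer_model("swin_base_patch4_window7_224", full_finetune=False)

# Only trainable parameters are handed to the optimiser.
optimizer = torch.optim.AdamW(
    (p for p in swin.parameters() if p.requires_grad), lr=1e-3
)
```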

 

DC Field: Value
dc.contributor.author: ZHOU, H
dc.contributor.author: Lu, C
dc.contributor.author: Yang, S
dc.contributor.author: Yu, Y
dc.date.accessioned: 2022-09-02T06:10:03Z
dc.date.available: 2022-09-02T06:10:03Z
dc.date.issued: 2021
dc.identifier.citation: ICCV Workshop on Deep Multi-Task Learning in Computer Vision (Virtual), Montreal, QC, Canada, October 11-17, 2021. In Proceedings: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2021), p. 2230-2238
dc.identifier.uri: http://hdl.handle.net/10722/316357
dc.description.abstract: Vision transformers have attracted much attention from computer vision researchers because they are not restricted to the spatial inductive bias of ConvNets. However, although Transformer-based backbones have achieved much progress on ImageNet classification, it is still unclear whether the learned representations are as transferable as, or even more transferable than, ConvNets’ features. To address this point, we systematically investigate the transfer learning ability of ConvNets and vision transformers in 15 single-task and multi-task performance evaluations. We observe consistent advantages of Transformer-based backbones on 13 of the 15 downstream tasks, including but not limited to fine-grained classification, scene recognition (classification, segmentation and depth estimation), open-domain classification and face recognition. More specifically, we find that two ViT models rely heavily on whole-network fine-tuning to achieve performance gains, while Swin Transformer does not have such a requirement. Moreover, vision transformers behave more robustly in multi-task learning, i.e., bringing more improvements when managing mutually beneficial tasks and reducing performance losses when tackling irrelevant tasks. We hope our discoveries can facilitate the exploration and exploitation of vision transformers in the future.
dc.language: eng
dc.publisher: IEEE Computer Society
dc.relation.ispartof: Proceedings: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2021)
dc.subject: Performance evaluation
dc.subject: Computer vision
dc.subject: Visualization
dc.subject: Face recognition
dc.subject: Conferences
dc.title: ConvNets vs. Transformers: Whose Visual Representations are More Transferable?
dc.type: Conference_Paper
dc.identifier.email: Yu, Y: yzyu@cs.hku.hk
dc.identifier.authority: Yu, Y=rp01415
dc.identifier.doi: 10.1109/ICCVW54120.2021.00252
dc.identifier.hkuros: 336340
dc.identifier.spage: 2230
dc.identifier.epage: 2238
dc.identifier.isi: WOS:000739651102035
dc.publisher.place: United States
