
Conference Paper: Revitalizing CNN attention via transformers in self-supervised visual representation learning

Title: Revitalizing CNN attention via transformers in self-supervised visual representation learning
Authors: Ge, C; Liang, Y; Song, Y; Jiao, J; Wang, J; Luo, P
Issue Date: 2021
Publisher: Neural Information Processing Systems Foundation
Citation: 35th Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), December 6-14, 2021. In Advances in Neural Information Processing Systems: 35th Conference on Neural Information Processing Systems (NeurIPS 2021), p. 4193-4206
Abstract: Studies on self-supervised visual representation learning (SSL) improve encoder backbones to discriminate training samples without labels. While CNN encoders via SSL achieve comparable recognition performance to those via supervised learning, their network attention is under-explored for further improvement. Motivated by transformers, which explore visual attention effectively in recognition scenarios, we propose a CNN Attention REvitalization (CARE) framework to train attentive CNN encoders guided by transformers in SSL. The proposed CARE framework consists of a CNN stream (C-stream) and a transformer stream (T-stream), where each stream contains two branches. C-stream follows an existing SSL framework with two CNN encoders, two projectors, and a predictor. T-stream contains two transformers, two projectors, and a predictor. T-stream connects to the CNN encoders and runs in parallel with the remaining C-stream. During training, we perform SSL in both streams simultaneously and use the T-stream output to supervise the C-stream. The features from the CNN encoders are modulated in the T-stream for visual attention enhancement and become suitable for the SSL scenario. We use these modulated features to supervise the C-stream for learning attentive CNN encoders. To this end, we revitalize CNN attention by using transformers as guidance. Experiments on several standard visual recognition benchmarks, including image classification, object detection, and semantic segmentation, show that the proposed CARE framework improves CNN encoder backbones to state-of-the-art performance.
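The abstract describes a concrete two-stream architecture, which the toy code below makes explicit. This is a minimal PyTorch sketch reconstructed from the abstract alone, not the authors' released implementation: the backbone is a stand-in for a real CNN encoder (e.g. ResNet-50), all names (CARESketch, MLP, feat_dim, proj_dim) are illustrative assumptions, and the T-stream's own momentum target and the usual loss symmetrization are omitted for brevity.

    # Minimal sketch of the CARE two-stream idea (assumptions: PyTorch; toy
    # CNN backbone; T-stream momentum branch and symmetrization omitted).
    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MLP(nn.Sequential):
        """Projector/predictor head, as commonly used in BYOL-style SSL."""
        def __init__(self, dim_in, dim_hidden, dim_out):
            super().__init__(
                nn.Linear(dim_in, dim_hidden),
                nn.BatchNorm1d(dim_hidden),
                nn.ReLU(inplace=True),
                nn.Linear(dim_hidden, dim_out),
            )

    class CARESketch(nn.Module):
        def __init__(self, feat_dim=256, proj_dim=128):
            super().__init__()
            # C-stream: online CNN encoder plus a frozen EMA target copy.
            self.encoder = nn.Sequential(       # stand-in for a ResNet backbone
                nn.Conv2d(3, feat_dim, 7, stride=4, padding=3),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(4),        # keep a 4x4 map: 16 tokens
            )
            self.target_encoder = copy.deepcopy(self.encoder)
            self.c_projector = MLP(feat_dim, 512, proj_dim)
            self.c_target_projector = copy.deepcopy(self.c_projector)
            self.c_predictor = MLP(proj_dim, 512, proj_dim)
            for p in list(self.target_encoder.parameters()) + list(
                    self.c_target_projector.parameters()):
                p.requires_grad = False
            # T-stream: transformer over spatial tokens of the CNN feature map.
            layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4,
                                               batch_first=True)
            self.t_stream = nn.TransformerEncoder(layer, num_layers=2)
            self.t_projector = MLP(feat_dim, 512, proj_dim)
            self.t_predictor = MLP(proj_dim, 512, proj_dim)

        def forward(self, view1, view2):
            # C-stream SSL term: online view1 chases the EMA target on view2.
            f1 = self.encoder(view1)            # B x C x 4 x 4
            c_online = self.c_predictor(self.c_projector(f1.mean(dim=(2, 3))))
            with torch.no_grad():
                c_target = self.c_target_projector(
                    self.target_encoder(view2).mean(dim=(2, 3)))
            loss_ssl = 2 - 2 * F.cosine_similarity(
                c_online, c_target, dim=-1).mean()
            # T-stream: attention-modulate the same CNN features, pool, project.
            tokens = f1.flatten(2).transpose(1, 2)   # B x 16 x C
            t_online = self.t_predictor(self.t_projector(
                self.t_stream(tokens).mean(dim=1)))
            # Supervision term: the T-stream output guides the C-stream output,
            # so the CNN encoder absorbs the transformer's attention. In the
            # full method the T-stream is trained by its own SSL loss (omitted
            # here), which is why t_online is detached in this sketch.
            loss_sup = 2 - 2 * F.cosine_similarity(
                c_online, t_online.detach(), dim=-1).mean()
            return loss_ssl + loss_sup

For a single training step one would compute loss = model(aug1(images), aug2(images)) on two augmented views, call loss.backward(), and then update target_encoder and c_target_projector as an exponential moving average of their online counterparts, as is standard for momentum-based SSL.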
Persistent Identifier: http://hdl.handle.net/10722/315860
ISBN: 9781713845393

DC Field / Value
dc.contributor.author: Ge, C
dc.contributor.author: Liang, Y
dc.contributor.author: Song, Y
dc.contributor.author: Jiao, J
dc.contributor.author: Wang, J
dc.contributor.author: Luo, P
dc.date.accessioned: 2022-08-19T09:05:45Z
dc.date.available: 2022-08-19T09:05:45Z
dc.date.issued: 2021
dc.identifier.citation: 35th Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), December 6-14, 2021. In Advances in Neural Information Processing Systems: 35th Conference on Neural Information Processing Systems (NeurIPS 2021), p. 4193-4206
dc.identifier.isbn: 9781713845393
dc.identifier.uri: http://hdl.handle.net/10722/315860
dc.description.abstract: Studies on self-supervised visual representation learning (SSL) improve encoder backbones to discriminate training samples without labels. While CNN encoders via SSL achieve comparable recognition performance to those via supervised learning, their network attention is under-explored for further improvement. Motivated by transformers, which explore visual attention effectively in recognition scenarios, we propose a CNN Attention REvitalization (CARE) framework to train attentive CNN encoders guided by transformers in SSL. The proposed CARE framework consists of a CNN stream (C-stream) and a transformer stream (T-stream), where each stream contains two branches. C-stream follows an existing SSL framework with two CNN encoders, two projectors, and a predictor. T-stream contains two transformers, two projectors, and a predictor. T-stream connects to the CNN encoders and runs in parallel with the remaining C-stream. During training, we perform SSL in both streams simultaneously and use the T-stream output to supervise the C-stream. The features from the CNN encoders are modulated in the T-stream for visual attention enhancement and become suitable for the SSL scenario. We use these modulated features to supervise the C-stream for learning attentive CNN encoders. To this end, we revitalize CNN attention by using transformers as guidance. Experiments on several standard visual recognition benchmarks, including image classification, object detection, and semantic segmentation, show that the proposed CARE framework improves CNN encoder backbones to state-of-the-art performance.
dc.language: eng
dc.publisher: Neural Information Processing Systems Foundation
dc.relation.ispartof: Advances in Neural Information Processing Systems: 35th Conference on Neural Information Processing Systems (NeurIPS 2021)
dc.title: Revitalizing CNN attention via transformers in self-supervised visual representation learning
dc.type: Conference_Paper
dc.identifier.email: Luo, P: pluo@hku.hk
dc.identifier.authority: Luo, P=rp02575
dc.identifier.hkuros: 335593
dc.identifier.spage: 4193
dc.identifier.epage: 4206
dc.publisher.place: United States
