Conference Paper: Stare at What You See: Masked Image Modeling without Reconstruction

Title: Stare at What You See: Masked Image Modeling without Reconstruction
Authors: Xue, Hongwei; Gao, Peng; Li, Hongyang; Qiao, Yu; Sun, Hao; Li, Houqiang; Luo, Jiebo
Keywords: Self-supervised or unsupervised representation learning
Issue Date: 2023
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2023, v. 2023-June, p. 22732-22741
Abstract: Masked Autoencoders (MAE) have been prevailing paradigms for large-scale vision representation pretraining. By reconstructing masked image patches from a small portion of visible image regions, MAE forces the model to infer semantic correlation within an image. Recently, some approaches apply semantic-rich teacher models to extract image features as the reconstruction target, leading to better performance. However, unlike the low-level features such as pixel values, we argue the features extracted by powerful teacher models already encode rich semantic correlation across regions in an intact image. This raises one question: is reconstruction necessary in Masked Image Modeling (MIM) with a teacher model? In this paper, we propose an efficient MIM paradigm named MaskAlign. MaskAlign simply learns the consistency of visible patch features extracted by the student model and intact image features extracted by the teacher model. To further advance the performance and tackle the problem of input inconsistency between the student and teacher model, we propose a Dynamic Alignment (DA) module to apply learnable alignment. Our experimental results demonstrate that masked modeling does not lose effectiveness even without reconstruction on masked regions. Combined with Dynamic Alignment, MaskAlign can achieve state-of-the-art performance with much higher efficiency. Code and models will be available at https://github.com/OpenPerceptionX/maskalign.
Persistent Identifier: http://hdl.handle.net/10722/351468
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331
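
The abstract describes the MaskAlign objective at a high level: the student encodes only the visible patches, the teacher encodes the intact image, and training enforces feature consistency on the visible positions instead of reconstructing masked regions. The following is a minimal PyTorch-style sketch of that idea, under stated assumptions: the module signatures (student, teacher, dynamic_align), the gather over visible indices, and the smooth-L1 loss with layer-normalized targets are illustrative choices, not taken from the paper or the released code at https://github.com/OpenPerceptionX/maskalign.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def maskalign_step(student: nn.Module,
                       teacher: nn.Module,
                       dynamic_align: nn.Module,
                       images: torch.Tensor,
                       visible_idx: torch.Tensor) -> torch.Tensor:
        """Compute a MaskAlign-style consistency loss for one batch.

        images:      [B, C, H, W] input batch
        visible_idx: [B, V] indices of the visible (unmasked) patches
        Returns a scalar loss; no reconstruction of masked regions is performed.
        """
        # Teacher encodes the intact image; it is frozen, so no gradients flow.
        with torch.no_grad():
            teacher_feats = teacher(images)                  # [B, N, D] patch features

        # Student encodes only the visible patches.
        student_feats = student(images, visible_idx)         # [B, V, D]

        # Hypothetical Dynamic Alignment: a learnable mapping of student features
        # toward the teacher's feature space (the paper's DA module is more involved).
        student_feats = dynamic_align(student_feats)         # [B, V, D]

        # Pick the teacher features at the same visible positions.
        gather_idx = visible_idx.unsqueeze(-1).expand(-1, -1, teacher_feats.size(-1))
        targets = torch.gather(teacher_feats, dim=1, index=gather_idx)  # [B, V, D]

        # Feature consistency on visible patches only (loss choice is an assumption).
        return F.smooth_l1_loss(student_feats, F.layer_norm(targets, targets.shape[-1:]))

The point this sketch illustrates is the efficiency argument in the abstract: because the loss is computed only on visible positions against the teacher's intact-image features, no decoder or masked-region reconstruction is needed.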

 

DC Field: Value
dc.contributor.author: Xue, Hongwei
dc.contributor.author: Gao, Peng
dc.contributor.author: Li, Hongyang
dc.contributor.author: Qiao, Yu
dc.contributor.author: Sun, Hao
dc.contributor.author: Li, Houqiang
dc.contributor.author: Luo, Jiebo
dc.date.accessioned: 2024-11-20T03:56:27Z
dc.date.available: 2024-11-20T03:56:27Z
dc.date.issued: 2023
dc.identifier.citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2023, v. 2023-June, p. 22732-22741
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10722/351468
dc.description.abstract: Masked Autoencoders (MAE) have been prevailing paradigms for large-scale vision representation pretraining. By reconstructing masked image patches from a small portion of visible image regions, MAE forces the model to infer semantic correlation within an image. Recently, some approaches apply semantic-rich teacher models to extract image features as the reconstruction target, leading to better performance. However, unlike the low-level features such as pixel values, we argue the features extracted by powerful teacher models already encode rich semantic correlation across regions in an intact image. This raises one question: is reconstruction necessary in Masked Image Modeling (MIM) with a teacher model? In this paper, we propose an efficient MIM paradigm named MaskAlign. MaskAlign simply learns the consistency of visible patch features extracted by the student model and intact image features extracted by the teacher model. To further advance the performance and tackle the problem of input inconsistency between the student and teacher model, we propose a Dynamic Alignment (DA) module to apply learnable alignment. Our experimental results demonstrate that masked modeling does not lose effectiveness even without reconstruction on masked regions. Combined with Dynamic Alignment, MaskAlign can achieve state-of-the-art performance with much higher efficiency. Code and models will be available at https://github.com/OpenPerceptionX/maskalign.
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.subject: Self-supervised or unsupervised representation learning
dc.title: Stare at What You See: Masked Image Modeling without Reconstruction
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR52729.2023.02177
dc.identifier.scopus: eid_2-s2.0-85162605329
dc.identifier.volume: 2023-June
dc.identifier.spage: 22732
dc.identifier.epage: 22741
