Conference Paper: Stare at What You See: Masked Image Modeling without Reconstruction
Title | Stare at What You See: Masked Image Modeling without Reconstruction |
---|---|
Authors | Xue, Hongwei; Gao, Peng; Li, Hongyang; Qiao, Yu; Sun, Hao; Li, Houqiang; Luo, Jiebo |
Keywords | Self-supervised or unsupervised representation learning |
Issue Date | 2023 |
Citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2023, v. 2023-June, p. 22732-22741 |
Abstract | Masked Autoencoders (MAE) have been prevailing paradigms for large-scale vision representation pretraining. By reconstructing masked image patches from a small portion of visible image regions, MAE forces the model to infer semantic correlation within an image. Recently, some approaches apply semantic-rich teacher models to extract image features as the reconstruction target, leading to better performance. However, unlike the low-level features such as pixel values, we argue the features extracted by powerful teacher models already encode rich semantic correlation across regions in an intact image. This raises one question: is reconstruction necessary in Masked Image Modeling (MIM) with a teacher model? In this paper, we propose an efficient MIM paradigm named MaskAlign. MaskAlign simply learns the consistency of visible patch features extracted by the student model and intact image features extracted by the teacher model. To further advance the performance and tackle the problem of input inconsistency between the student and teacher model, we propose a Dynamic Alignment (DA) module to apply learnable alignment. Our experimental results demonstrate that masked modeling does not lose effectiveness even without reconstruction on masked regions. Combined with Dynamic Alignment, MaskAlign can achieve state-of-the-art performance with much higher efficiency. Code and models will be available at https://github.com/OpenPerceptionX/maskalign. |
Persistent Identifier | http://hdl.handle.net/10722/351468 |
ISSN | 1063-6919 |
SCImago Journal Rankings (2023) | 10.331 |
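
As a rough illustration of the paradigm described in the abstract, the sketch below shows what a MaskAlign-style objective could look like in PyTorch: the student encodes only the visible patches, a frozen teacher encodes the intact image, and a learnable alignment module matches the two at the visible positions. The names (`DynamicAlignment`, `maskalign_loss`), the token-wise softmax gating, and the cosine consistency loss are illustrative assumptions, not the paper's exact design; see the official code at https://github.com/OpenPerceptionX/maskalign for the real implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicAlignment(nn.Module):
    """Illustrative stand-in for the paper's Dynamic Alignment (DA) module:
    a learnable, token-wise gate that mixes multi-level student features
    before they are compared with the teacher's features."""

    def __init__(self, dim: int, num_levels: int):
        super().__init__()
        self.gate = nn.Linear(dim, num_levels)  # one mixing weight per level

    def forward(self, student_levels: list[torch.Tensor]) -> torch.Tensor:
        # student_levels: per-level tensors of shape (B, N_visible, dim)
        stacked = torch.stack(student_levels, dim=-1)    # (B, N, dim, L)
        weights = self.gate(stacked.mean(dim=-1))        # (B, N, L)
        weights = weights.softmax(dim=-1).unsqueeze(2)   # (B, N, 1, L)
        return (stacked * weights).sum(dim=-1)           # (B, N, dim)

def maskalign_loss(student_levels, teacher_feats, visible_idx, da):
    """Align student features (computed from visible patches only) with a
    frozen teacher's features of the intact image at the same positions;
    nothing is reconstructed for the masked patches."""
    aligned = da(student_levels)                         # (B, N_vis, dim)
    # Gather the teacher's tokens at the visible positions.
    idx = visible_idx.unsqueeze(-1).expand(-1, -1, teacher_feats.size(-1))
    target = teacher_feats.gather(1, idx)                # (B, N_vis, dim)
    # A cosine consistency objective; the paper's exact loss may differ.
    return 1.0 - F.cosine_similarity(aligned, target, dim=-1).mean()

# Toy usage with random tensors standing in for the real encoders.
B, N, dim, levels = 2, 196, 768, 3
student_levels = [torch.randn(B, 49, dim) for _ in range(levels)]  # 25% visible
teacher_feats = torch.randn(B, N, dim)                             # intact image
visible_idx = torch.stack([torch.randperm(N)[:49] for _ in range(B)])
loss = maskalign_loss(student_levels, teacher_feats, visible_idx,
                      DynamicAlignment(dim, levels))
```

Because the student only ever processes the visible patches and no decoder reconstructs the masked ones, each training step runs the student on a small fraction of the image, which is consistent with the efficiency gain the abstract claims.
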
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Xue, Hongwei | - |
dc.contributor.author | Gao, Peng | - |
dc.contributor.author | Li, Hongyang | - |
dc.contributor.author | Qiao, Yu | - |
dc.contributor.author | Sun, Hao | - |
dc.contributor.author | Li, Houqiang | - |
dc.contributor.author | Luo, Jiebo | - |
dc.date.accessioned | 2024-11-20T03:56:27Z | - |
dc.date.available | 2024-11-20T03:56:27Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2023, v. 2023-June, p. 22732-22741 | - |
dc.identifier.issn | 1063-6919 | - |
dc.identifier.uri | http://hdl.handle.net/10722/351468 | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | - |
dc.subject | Self-supervised or unsupervised representation learning | - |
dc.title | Stare at What You See: Masked Image Modeling without Reconstruction | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/CVPR52729.2023.02177 | - |
dc.identifier.scopus | eid_2-s2.0-85162605329 | - |
dc.identifier.volume | 2023-June | - |
dc.identifier.spage | 22732 | - |
dc.identifier.epage | 22741 | - |