Conference Paper: Align Representations with Base: A New Approach to Self-Supervised Learning

Title: Align Representations with Base: A New Approach to Self-Supervised Learning
Authors: Zhang, Shaofeng; Qiu, Lyn; Zhu, Feng; Yan, Junchi; Zhang, Hengrui; Zhao, Rui; Li, Hongyang; Yang, Xiaokang
Keywords: Representation learning; Self- & semi- & meta- & unsupervised learning
Issue Date: 2022
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2022, v. 2022-June, p. 16579-16588
Abstract: Existing symmetric contrastive learning methods suffer from collapse (complete and dimensional) or from the quadratic complexity of their objectives. Departing from these methods, which maximize the mutual information of two generated views along either the instance or the feature dimension, the proposed paradigm introduces intermediate variables at the feature level and maximizes the consistency between these variables and the representations of each view. Specifically, the intermediate variables are the group of base vectors nearest to the representations; hence the method is called ARB (Align Representations with Base). Compared with other symmetric approaches, ARB 1) does not require negative pairs, so the complexity of the overall objective is linear; 2) reduces feature redundancy, increasing the information density of training samples; and 3) is more robust to the output dimension size, outperforming previous feature-wise methods by over 28% Top-1 accuracy on ImageNet-100 under low-dimension settings.
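The alignment idea described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: it assumes (as one common construction) that the "nearest group of base vectors" to a representation matrix is the nearest orthonormal matrix in Frobenius norm, obtained by polar decomposition via SVD; the paper's exact base construction and loss may differ, and the names `nearest_base` and `arb_loss` are hypothetical.

```python
import numpy as np

def nearest_base(z):
    """Orthonormal matrix nearest to z in Frobenius norm
    (polar decomposition via SVD; an assumed stand-in for the paper's base)."""
    u, _, vt = np.linalg.svd(z, full_matrices=False)
    return u @ vt

def arb_loss(z1, z2):
    """Symmetrically align each view's representations with the other's base.
    No negative pairs are involved, so the cost is linear in batch size."""
    b1, b2 = nearest_base(z1), nearest_base(z2)
    return 0.5 * (np.mean((z1 - b2) ** 2) + np.mean((z2 - b1) ** 2))

# Usage: z1, z2 are (batch, dim) embeddings of two augmented views.
rng = np.random.default_rng(0)
z1 = rng.standard_normal((8, 4))
z2 = z1 + 0.1 * rng.standard_normal((8, 4))
loss = arb_loss(z1, z2)
```

Because the base is already orthonormal, pulling representations toward it also decorrelates feature dimensions, which matches the abstract's claim of reduced feature redundancy without any pairwise (quadratic) terms.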
Persistent Identifier: http://hdl.handle.net/10722/351450
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331

 

DC Field: Value
dc.contributor.author: Zhang, Shaofeng
dc.contributor.author: Qiu, Lyn
dc.contributor.author: Zhu, Feng
dc.contributor.author: Yan, Junchi
dc.contributor.author: Zhang, Hengrui
dc.contributor.author: Zhao, Rui
dc.contributor.author: Li, Hongyang
dc.contributor.author: Yang, Xiaokang
dc.date.accessioned: 2024-11-20T03:56:21Z
dc.date.available: 2024-11-20T03:56:21Z
dc.date.issued: 2022
dc.identifier.citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2022, v. 2022-June, p. 16579-16588
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10722/351450
dc.description.abstract: Existing symmetric contrastive learning methods suffer from collapse (complete and dimensional) or from the quadratic complexity of their objectives. Departing from these methods, which maximize the mutual information of two generated views along either the instance or the feature dimension, the proposed paradigm introduces intermediate variables at the feature level and maximizes the consistency between these variables and the representations of each view. Specifically, the intermediate variables are the group of base vectors nearest to the representations; hence the method is called ARB (Align Representations with Base). Compared with other symmetric approaches, ARB 1) does not require negative pairs, so the complexity of the overall objective is linear; 2) reduces feature redundancy, increasing the information density of training samples; and 3) is more robust to the output dimension size, outperforming previous feature-wise methods by over 28% Top-1 accuracy on ImageNet-100 under low-dimension settings.
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.subject: Representation learning
dc.subject: Self- & semi- & meta- & unsupervised learning
dc.title: Align Representations with Base: A New Approach to Self-Supervised Learning
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR52688.2022.01610
dc.identifier.scopus: eid_2-s2.0-85137150658
dc.identifier.volume: 2022-June
dc.identifier.spage: 16579
dc.identifier.epage: 16588
