Article: Any region can be perceived equally and effectively on rotation pretext task using full rotation and weighted-region mixture

Title: Any region can be perceived equally and effectively on rotation pretext task using full rotation and weighted-region mixture
Authors: Dai, Wei; Wu, Tianyi; Liu, Rui; Wang, Min; Yin, Jianqin; Liu, Jun
Issue Date: 1-Aug-2024
Publisher: Elsevier
Citation: Neural Networks, 2024, v. 176
Abstract

In recent years, self-supervised learning has emerged as a powerful approach to learning visual representations without requiring extensive manual annotation. One popular technique involves using rotation transformations of images, which provide a clear visual signal for learning semantic representation. However, in this work, we revisit the pretext task of predicting image rotation in self-supervised learning and discover that it tends to marginalise the perception of features located near the centre of an image. To address this limitation, we propose a new self-supervised learning method, namely FullRot, which spotlights underrated regions by resizing the randomly selected and cropped regions of images. Moreover, FullRot increases the complexity of the rotation pretext task by applying the degree-free rotation to the region cropped into a circle. To encourage models to learn from different general parts of an image, we introduce a new data mixture technique called WRMix, which merges two random intra-image patches. By combining these innovative crop and rotation methods with the data mixture scheme, our approach, FullRot + WRMix, surpasses the state-of-the-art self-supervision methods in classification, segmentation, and object detection tasks on ten benchmark datasets with an improvement of up to +13.98% accuracy on STL-10, +8.56% accuracy on CIFAR-10, +10.20% accuracy on Sports-100, +15.86% accuracy on Mammals-45, +15.15% accuracy on PAD-UFES-20, +32.44% mIoU on VOC 2012, +7.62% mIoU on ISIC 2018, +9.70% mIoU on FloodArea, +25.16% AP50 on VOC 2007, and +58.69% AP50 on UTDAC 2020. The code is available at https://github.com/anthonyweidai/FullRot_WRMix. 
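The abstract describes two concrete ingredients: FullRot, which crops a random region, masks it to a circle, and rotates it by an arbitrary (degree-free) angle to form the pretext label; and WRMix, which blends two random patches from the same image. The sketch below illustrates those ideas only; it is not the authors' implementation (see their linked repository for the real code), and the function names `fullrot_target`, `circular_mask`, and `wrmix_blend`, along with the parameter choices, are assumptions for demonstration.

```python
import math
import random


def fullrot_target(image_size, crop_frac=0.5, num_bins=360):
    """Sample a random square crop and a degree-free rotation angle,
    returning the crop box, the angle, and a binned rotation label."""
    h, w = image_size
    side = int(min(h, w) * crop_frac)
    # Random top-left corner so the crop stays inside the image.
    top = random.randint(0, h - side)
    left = random.randint(0, w - side)
    angle = random.uniform(0.0, 360.0)  # any angle, not just 0/90/180/270
    label = int(angle / 360.0 * num_bins) % num_bins
    return (top, left, side), angle, label


def circular_mask(side):
    """Boolean mask keeping only pixels inside the inscribed circle, so a
    rotation by any angle leaves the visible content's shape unchanged."""
    r = side / 2.0
    cx = cy = r - 0.5
    return [[(x - cx) ** 2 + (y - cy) ** 2 <= r * r
             for x in range(side)] for y in range(side)]


def wrmix_blend(patch_a, patch_b, lam):
    """Weighted per-pixel blend of two equal-sized intra-image patches."""
    return [[lam * a + (1.0 - lam) * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(patch_a, patch_b)]
```

A training loop would apply `circular_mask` to the crop returned by `fullrot_target`, rotate it by `angle`, and train the network to predict `label`; `wrmix_blend` would merge two such patches drawn from the same image.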


Persistent Identifier: http://hdl.handle.net/10722/347747
ISSN: 0893-6080
2023 Impact Factor: 6.0
2023 SCImago Journal Rankings: 2.605

 

DC Field: Value
dc.contributor.author: Dai, Wei
dc.contributor.author: Wu, Tianyi
dc.contributor.author: Liu, Rui
dc.contributor.author: Wang, Min
dc.contributor.author: Yin, Jianqin
dc.contributor.author: Liu, Jun
dc.date.accessioned: 2024-09-28T00:30:20Z
dc.date.available: 2024-09-28T00:30:20Z
dc.date.issued: 2024-08-01
dc.identifier.citation: Neural Networks, 2024, v. 176
dc.identifier.issn: 0893-6080
dc.identifier.uri: http://hdl.handle.net/10722/347747
dc.description.abstract: In recent years, self-supervised learning has emerged as a powerful approach to learning visual representations without requiring extensive manual annotation. One popular technique involves using rotation transformations of images, which provide a clear visual signal for learning semantic representation. However, in this work, we revisit the pretext task of predicting image rotation in self-supervised learning and discover that it tends to marginalise the perception of features located near the centre of an image. To address this limitation, we propose a new self-supervised learning method, namely FullRot, which spotlights underrated regions by resizing the randomly selected and cropped regions of images. Moreover, FullRot increases the complexity of the rotation pretext task by applying the degree-free rotation to the region cropped into a circle. To encourage models to learn from different general parts of an image, we introduce a new data mixture technique called WRMix, which merges two random intra-image patches. By combining these innovative crop and rotation methods with the data mixture scheme, our approach, FullRot + WRMix, surpasses the state-of-the-art self-supervision methods in classification, segmentation, and object detection tasks on ten benchmark datasets with an improvement of up to +13.98% accuracy on STL-10, +8.56% accuracy on CIFAR-10, +10.20% accuracy on Sports-100, +15.86% accuracy on Mammals-45, +15.15% accuracy on PAD-UFES-20, +32.44% mIoU on VOC 2012, +7.62% mIoU on ISIC 2018, +9.70% mIoU on FloodArea, +25.16% AP50 on VOC 2007, and +58.69% AP50 on UTDAC 2020. The code is available at https://github.com/anthonyweidai/FullRot_WRMix.
dc.language: eng
dc.publisher: Elsevier
dc.relation.ispartof: Neural Networks
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.title: Any region can be perceived equally and effectively on rotation pretext task using full rotation and weighted-region mixture
dc.type: Article
dc.identifier.doi: 10.1016/j.neunet.2024.106350
dc.identifier.volume: 176
dc.identifier.eissn: 1879-2782
dc.identifier.issnl: 0893-6080
