Article: Any region can be perceived equally and effectively on rotation pretext task using full rotation and weighted-region mixture

Title: Any region can be perceived equally and effectively on rotation pretext task using full rotation and weighted-region mixture
Authors: Dai, Wei; Wu, Tianyi; Liu, Rui; Wang, Min; Yin, Jianqin; Liu, Jun
Keywords: Data mixing; Full rotation; Self-supervised learning; Vision impairment
Issue Date: 2024
Citation: Neural Networks, 2024, v. 176, article no. 106350
Abstract: In recent years, self-supervised learning has emerged as a powerful approach to learning visual representations without requiring extensive manual annotation. One popular technique involves using rotation transformations of images, which provide a clear visual signal for learning semantic representation. However, in this work, we revisit the pretext task of predicting image rotation in self-supervised learning and discover that it tends to marginalise the perception of features located near the centre of an image. To address this limitation, we propose a new self-supervised learning method, namely FullRot, which spotlights underrated regions by resizing the randomly selected and cropped regions of images. Moreover, FullRot increases the complexity of the rotation pretext task by applying the degree-free rotation to the region cropped into a circle. To encourage models to learn from different general parts of an image, we introduce a new data mixture technique called WRMix, which merges two random intra-image patches. By combining these innovative crop and rotation methods with the data mixture scheme, our approach, FullRot + WRMix, surpasses the state-of-the-art self-supervision methods in classification, segmentation, and object detection tasks on ten benchmark datasets with an improvement of up to +13.98% accuracy on STL-10, +8.56% accuracy on CIFAR-10, +10.20% accuracy on Sports-100, +15.86% accuracy on Mammals-45, +15.15% accuracy on PAD-UFES-20, +32.44% mIoU on VOC 2012, +7.62% mIoU on ISIC 2018, +9.70% mIoU on FloodArea, +25.16% AP50 on VOC 2007, and +58.69% AP50 on UTDAC 2020. The code is available at https://github.com/anthonyweidai/FullRot_WRMix.
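The two components described in the abstract can be sketched roughly as follows. This is an illustrative numpy sketch, not the authors' implementation (which is at the GitHub link above): the function names `fullrot_sample` and `wrmix`, the nearest-neighbour rotation, the fixed mixing weight `lam`, and the use of the raw angle as the pretext label are all assumptions; the paper's actual crop/resize policy, angle handling, and region weighting may differ.

```python
import numpy as np

def circular_mask(size):
    """Boolean disc mask centred in a size x size grid."""
    yy, xx = np.mgrid[:size, :size]
    c = (size - 1) / 2.0
    return (yy - c) ** 2 + (xx - c) ** 2 <= (size / 2.0) ** 2

def rotate_nn(img, angle_deg):
    """Rotate a square H x W (x C) array about its centre by an arbitrary
    angle, nearest-neighbour, filling out-of-range pixels with 0."""
    h, w = img.shape[:2]
    c = (h - 1) / 2.0
    t = np.deg2rad(angle_deg)
    yy, xx = np.mgrid[:h, :w]
    # Inverse-map each output coordinate back into the source image.
    ys = c + (yy - c) * np.cos(t) - (xx - c) * np.sin(t)
    xs = c + (yy - c) * np.sin(t) + (xx - c) * np.cos(t)
    yi, xi = np.rint(ys).astype(int), np.rint(xs).astype(int)
    valid = (yi >= 0) & (yi < h) & (xi >= 0) & (xi < w)
    out = np.zeros_like(img)
    out[valid] = img[yi[valid], xi[valid]]
    return out

def fullrot_sample(img, crop, rng):
    """FullRot-style pretext sample (assumed form): crop a random square
    region, mask it into a disc so corner artefacts cannot leak the angle,
    rotate by a degree-free angle, return (rotated_disc, angle_label)."""
    h, w = img.shape[:2]
    y = rng.integers(0, h - crop + 1)
    x = rng.integers(0, w - crop + 1)
    region = img[y:y + crop, x:x + crop].copy()
    region[~circular_mask(crop)] = 0          # circular crop
    angle = float(rng.uniform(0.0, 360.0))    # degree-free rotation
    return rotate_nn(region, angle), angle

def wrmix(img, patch, rng, lam=0.5):
    """Hypothetical WRMix-style intra-image mixing: blend one random patch
    of `img` into another random patch at weight `lam`."""
    h, w = img.shape[:2]
    (y1, x1), (y2, x2) = [(rng.integers(0, h - patch + 1),
                           rng.integers(0, w - patch + 1)) for _ in range(2)]
    out = img.astype(float).copy()
    src = out[y1:y1 + patch, x1:x1 + patch]
    out[y2:y2 + patch, x2:x2 + patch] = (
        lam * out[y2:y2 + patch, x2:x2 + patch] + (1 - lam) * src)
    return out
```

A model trained on such samples would predict the rotation angle of the disc; because the disc comes from a random, resized region rather than the full frame, central and peripheral image content are treated alike, which is the imbalance the abstract says plain rotation prediction suffers from.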
Persistent Identifier: http://hdl.handle.net/10722/350070
ISSN: 0893-6080
2023 Impact Factor: 6.0
2023 SCImago Journal Rankings: 2.605
DC Field: Value
dc.contributor.author: Dai, Wei
dc.contributor.author: Wu, Tianyi
dc.contributor.author: Liu, Rui
dc.contributor.author: Wang, Min
dc.contributor.author: Yin, Jianqin
dc.contributor.author: Liu, Jun
dc.date.accessioned: 2024-10-17T07:02:52Z
dc.date.available: 2024-10-17T07:02:52Z
dc.date.issued: 2024
dc.identifier.citation: Neural Networks, 2024, v. 176, article no. 106350
dc.identifier.issn: 0893-6080
dc.identifier.uri: http://hdl.handle.net/10722/350070
dc.description.abstract: In recent years, self-supervised learning has emerged as a powerful approach to learning visual representations without requiring extensive manual annotation. One popular technique involves using rotation transformations of images, which provide a clear visual signal for learning semantic representation. However, in this work, we revisit the pretext task of predicting image rotation in self-supervised learning and discover that it tends to marginalise the perception of features located near the centre of an image. To address this limitation, we propose a new self-supervised learning method, namely FullRot, which spotlights underrated regions by resizing the randomly selected and cropped regions of images. Moreover, FullRot increases the complexity of the rotation pretext task by applying the degree-free rotation to the region cropped into a circle. To encourage models to learn from different general parts of an image, we introduce a new data mixture technique called WRMix, which merges two random intra-image patches. By combining these innovative crop and rotation methods with the data mixture scheme, our approach, FullRot + WRMix, surpasses the state-of-the-art self-supervision methods in classification, segmentation, and object detection tasks on ten benchmark datasets with an improvement of up to +13.98% accuracy on STL-10, +8.56% accuracy on CIFAR-10, +10.20% accuracy on Sports-100, +15.86% accuracy on Mammals-45, +15.15% accuracy on PAD-UFES-20, +32.44% mIoU on VOC 2012, +7.62% mIoU on ISIC 2018, +9.70% mIoU on FloodArea, +25.16% AP50 on VOC 2007, and +58.69% AP50 on UTDAC 2020. The code is available at https://github.com/anthonyweidai/FullRot_WRMix.
dc.language: eng
dc.relation.ispartof: Neural Networks
dc.subject: Data mixing
dc.subject: Full rotation
dc.subject: Self-supervised learning
dc.subject: Vision impairment
dc.title: Any region can be perceived equally and effectively on rotation pretext task using full rotation and weighted-region mixture
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1016/j.neunet.2024.106350
dc.identifier.pmid: 38723309
dc.identifier.scopus: eid_2-s2.0-85192291180
dc.identifier.volume: 176
dc.identifier.spage: article no. 106350
dc.identifier.epage: article no. 106350
dc.identifier.eissn: 1879-2782

Export via OAI-PMH Interface in XML Formats


OR


Export to Other Non-XML Formats