Article: UniMatch V2: Pushing the Limit of Semi-Supervised Semantic Segmentation

Title: UniMatch V2: Pushing the Limit of Semi-Supervised Semantic Segmentation
Authors: Yang, Lihe; Zhao, Zhen; Zhao, Hengshuang
Keywords: semantic segmentation; semi-supervised learning; vision transformer; weak-to-strong consistency
Issue Date: 13-Jan-2025
Publisher: Institute of Electrical and Electronics Engineers
Citation:
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025, v. 47, n. 4, p. 3031-3048
Abstract

Semi-supervised semantic segmentation (SSS) aims to learn rich visual knowledge from cheap unlabeled images to enhance semantic segmentation capability. Among recent works, UniMatch (Yang et al. 2023) improves substantially on its predecessors by amplifying the practice of weak-to-strong consistency regularization. Subsequent works typically follow similar pipelines and propose various delicate designs. Despite this progress, strangely, even in this flourishing era of numerous powerful vision models, almost all SSS works still stick to 1) using outdated ResNet encoders with small-scale ImageNet-1K pre-training, and 2) evaluation on the simple Pascal and Cityscapes datasets. In this work, we argue that it is necessary to switch the baseline of SSS from ResNet-based encoders to more capable ViT-based encoders (e.g., DINOv2) that are pre-trained on massive data. A simple update of the encoder (even one using 2× fewer parameters) can bring more significant improvement than careful method designs. Built on this competitive baseline, we present our upgraded and simplified UniMatch V2, which inherits the core spirit of weak-to-strong consistency from V1 while requiring less training cost and providing consistently better results. Additionally, witnessing the gradually saturating performance on Pascal and Cityscapes, we urge the community to focus on more challenging benchmarks with complex taxonomies, such as the ADE20K and COCO datasets.
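The weak-to-strong consistency regularization mentioned in the abstract can be illustrated with a minimal sketch: predictions on a weakly augmented view yield hard pseudo-labels, which supervise the predictions on a strongly augmented view of the same pixels, with low-confidence pixels masked out. This is only a schematic illustration of the general idea, not the paper's implementation; the function name, the threshold value, and the use of NumPy instead of a deep-learning framework are all assumptions for brevity.

```python
import numpy as np

def weak_to_strong_loss(p_weak, p_strong, tau=0.95):
    """Schematic weak-to-strong consistency loss (hypothetical sketch).

    p_weak:   (N, C) softmax outputs on weakly augmented views (pseudo-label source)
    p_strong: (N, C) softmax outputs on strongly augmented views (being supervised)
    tau:      confidence threshold; predictions below it are ignored
    """
    pseudo = p_weak.argmax(axis=1)      # hard pseudo-labels from the weak view
    conf = p_weak.max(axis=1)           # confidence of each pseudo-label
    mask = conf >= tau                  # keep only confident positions
    if not mask.any():
        return 0.0
    # cross-entropy of strong-view predictions against weak-view pseudo-labels
    ce = -np.log(p_strong[np.arange(len(pseudo)), pseudo] + 1e-12)
    return float((ce * mask).sum() / mask.sum())
```

In an actual training loop this unsupervised term would be computed per pixel over a batch of unlabeled images and added to the supervised loss on the labeled set.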


Persistent Identifier: http://hdl.handle.net/10722/362094
ISSN: 0162-8828
2023 Impact Factor: 20.8
2023 SCImago Journal Rankings: 6.158

 

DC Field: Value
dc.contributor.author: Yang, Lihe
dc.contributor.author: Zhao, Zhen
dc.contributor.author: Zhao, Hengshuang
dc.date.accessioned: 2025-09-19T00:31:51Z
dc.date.available: 2025-09-19T00:31:51Z
dc.date.issued: 2025-01-13
dc.identifier.citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025, v. 47, n. 4, p. 3031-3048
dc.identifier.issn: 0162-8828
dc.identifier.uri: http://hdl.handle.net/10722/362094
dc.description.abstract: Semi-supervised semantic segmentation (SSS) aims to learn rich visual knowledge from cheap unlabeled images to enhance semantic segmentation capability. Among recent works, UniMatch (Yang et al. 2023) improves substantially on its predecessors by amplifying the practice of weak-to-strong consistency regularization. Subsequent works typically follow similar pipelines and propose various delicate designs. Despite this progress, strangely, even in this flourishing era of numerous powerful vision models, almost all SSS works still stick to 1) using outdated ResNet encoders with small-scale ImageNet-1K pre-training, and 2) evaluation on the simple Pascal and Cityscapes datasets. In this work, we argue that it is necessary to switch the baseline of SSS from ResNet-based encoders to more capable ViT-based encoders (e.g., DINOv2) that are pre-trained on massive data. A simple update of the encoder (even one using 2× fewer parameters) can bring more significant improvement than careful method designs. Built on this competitive baseline, we present our upgraded and simplified UniMatch V2, which inherits the core spirit of weak-to-strong consistency from V1 while requiring less training cost and providing consistently better results. Additionally, witnessing the gradually saturating performance on Pascal and Cityscapes, we urge the community to focus on more challenging benchmarks with complex taxonomies, such as the ADE20K and COCO datasets.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Pattern Analysis and Machine Intelligence
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: semantic segmentation
dc.subject: Semi-supervised learning
dc.subject: vision transformer
dc.subject: weak-to-strong consistency
dc.title: UniMatch V2: Pushing the Limit of Semi-Supervised Semantic Segmentation
dc.type: Article
dc.identifier.doi: 10.1109/TPAMI.2025.3528453
dc.identifier.pmid: 40031040
dc.identifier.scopus: eid_2-s2.0-86000338589
dc.identifier.volume: 47
dc.identifier.issue: 4
dc.identifier.spage: 3031
dc.identifier.epage: 3048
dc.identifier.eissn: 1939-3539
dc.identifier.issnl: 0162-8828
