Links for fulltext (may require subscription)
- Publisher website (DOI): https://doi.org/10.1109/TPAMI.2025.3528453
- Scopus: eid_2-s2.0-86000338589
- PubMed (PMID): 40031040
Article: UniMatch V2: Pushing the Limit of Semi-Supervised Semantic Segmentation
| Title | UniMatch V2: Pushing the Limit of Semi-Supervised Semantic Segmentation |
|---|---|
| Authors | Yang, Lihe; Zhao, Zhen; Zhao, Hengshuang |
| Keywords | semantic segmentation; semi-supervised learning; vision transformer; weak-to-strong consistency |
| Issue Date | 13-Jan-2025 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025, v. 47, n. 4, p. 3031-3048 |
| Abstract | Semi-supervised semantic segmentation (SSS) aims to learn rich visual knowledge from cheap unlabeled images to enhance semantic segmentation capability. Among recent works, UniMatch (Yang et al. 2023) improves on its predecessors tremendously by amplifying the practice of weak-to-strong consistency regularization. Subsequent works typically follow similar pipelines and propose various delicate designs. Despite the achieved progress, strangely, even in this flourishing era of numerous powerful vision models, almost all SSS works still stick to 1) using outdated ResNet encoders with small-scale ImageNet-1K pre-training, and 2) evaluating on the simple Pascal and Cityscapes datasets. In this work, we argue that it is necessary to switch the baseline of SSS from ResNet-based encoders to more capable ViT-based encoders (e.g., DINOv2) that are pre-trained on massive data. A simple update of the encoder (even using 2× fewer parameters) can bring a more significant improvement than careful method designs. Built on this competitive baseline, we present our upgraded and simplified UniMatch V2, which inherits the core spirit of weak-to-strong consistency from V1 but requires less training cost and provides consistently better results. Additionally, witnessing the gradually saturating performance on Pascal and Cityscapes, we argue that the field should move on to more challenging benchmarks with complex taxonomies, such as ADE20K and COCO. (A minimal code sketch of weak-to-strong consistency follows this table.) |
| Persistent Identifier | http://hdl.handle.net/10722/362094 |
| ISSN | 0162-8828 |
| 2023 Impact Factor | 20.8 |
| 2023 SCImago Journal Rank | 6.158 |
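
The abstract's central mechanism, weak-to-strong consistency regularization, boils down to: a pseudo-label predicted on a weakly augmented unlabeled image supervises the prediction on a strongly perturbed view of the same image, with low-confidence pixels masked out. The sketch below is a minimal illustrative rendering of that idea in PyTorch, not the authors' code: `TinySegNet`, `strong_aug`, `weak_to_strong_loss`, and the 0.95 confidence threshold are all assumptions for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a real segmentation network (e.g., a DINOv2 encoder + decoder).
class TinySegNet(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        self.net = nn.Conv2d(3, num_classes, kernel_size=3, padding=1)

    def forward(self, x):
        return self.net(x)  # (B, num_classes, H, W) per-pixel logits

def strong_aug(x):
    # Hypothetical strong perturbation: additive noise here for illustration;
    # real pipelines use color jitter, grayscale, blur, CutMix, etc.
    return x + 0.1 * torch.randn_like(x)

def weak_to_strong_loss(model, weak_batch, conf_threshold=0.95):
    # Pseudo-label the weakly augmented view without gradients.
    with torch.no_grad():
        probs = model(weak_batch).softmax(dim=1)   # (B, C, H, W)
        conf, pseudo = probs.max(dim=1)            # per-pixel confidence and label
        mask = (conf >= conf_threshold).float()    # train only on confident pixels

    # Predict on the strongly perturbed view and match the pseudo-label.
    strong_logits = model(strong_aug(weak_batch))
    loss = F.cross_entropy(strong_logits, pseudo, reduction="none")  # (B, H, W)
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)

model = TinySegNet()
unlabeled = torch.randn(2, 3, 64, 64)              # a weakly augmented batch
print(weak_to_strong_loss(model, unlabeled).item())
```

UniMatch V1/V2 add further refinements on top of this skeleton (e.g., multiple strong views and feature-level perturbation per the V1 paper), but the confidence-masked cross-entropy between weak and strong views is the core loss the abstract refers to.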
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Yang, Lihe | - |
| dc.contributor.author | Zhao, Zhen | - |
| dc.contributor.author | Zhao, Hengshuang | - |
| dc.date.accessioned | 2025-09-19T00:31:51Z | - |
| dc.date.available | 2025-09-19T00:31:51Z | - |
| dc.date.issued | 2025-01-13 | - |
| dc.identifier.citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025, v. 47, n. 4, p. 3031-3048 | - |
| dc.identifier.issn | 0162-8828 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/362094 | - |
| dc.description.abstract | Semi-supervised semantic segmentation (SSS) aims to learn rich visual knowledge from cheap unlabeled images to enhance semantic segmentation capability. Among recent works, UniMatch (Yang et al. 2023) improves on its predecessors tremendously by amplifying the practice of weak-to-strong consistency regularization. Subsequent works typically follow similar pipelines and propose various delicate designs. Despite the achieved progress, strangely, even in this flourishing era of numerous powerful vision models, almost all SSS works still stick to 1) using outdated ResNet encoders with small-scale ImageNet-1K pre-training, and 2) evaluating on the simple Pascal and Cityscapes datasets. In this work, we argue that it is necessary to switch the baseline of SSS from ResNet-based encoders to more capable ViT-based encoders (e.g., DINOv2) that are pre-trained on massive data. A simple update of the encoder (even using 2× fewer parameters) can bring a more significant improvement than careful method designs. Built on this competitive baseline, we present our upgraded and simplified UniMatch V2, which inherits the core spirit of weak-to-strong consistency from V1 but requires less training cost and provides consistently better results. Additionally, witnessing the gradually saturating performance on Pascal and Cityscapes, we argue that the field should move on to more challenging benchmarks with complex taxonomies, such as ADE20K and COCO. | - |
| dc.language | eng | - |
| dc.publisher | Institute of Electrical and Electronics Engineers | - |
| dc.relation.ispartof | IEEE Transactions on Pattern Analysis and Machine Intelligence | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | semantic segmentation | - |
| dc.subject | Semi-supervised learning | - |
| dc.subject | vision transformer | - |
| dc.subject | weak-to-strong consistency | - |
| dc.title | UniMatch V2: Pushing the Limit of Semi-Supervised Semantic Segmentation | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/TPAMI.2025.3528453 | - |
| dc.identifier.pmid | 40031040 | - |
| dc.identifier.scopus | eid_2-s2.0-86000338589 | - |
| dc.identifier.volume | 47 | - |
| dc.identifier.issue | 4 | - |
| dc.identifier.spage | 3031 | - |
| dc.identifier.epage | 3048 | - |
| dc.identifier.eissn | 1939-3539 | - |
| dc.identifier.issnl | 0162-8828 | - |
