Links for fulltext
(May Require Subscription)
- Publisher Website (DOI): 10.1109/CVPR46437.2021.00177
- Scopus: eid_2-s2.0-85122390251
- WOS: WOS:000739917301091
Conference Paper: One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation
Title | One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation |
---|---|
Authors | Liu, Z; Qi, X; Fu, C |
Issue Date | 2021 |
Publisher | IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000147 |
Citation | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20-25 June 2021, p. 1726-1736 |
Abstract | Point cloud semantic segmentation often requires large-scale annotated training data, but clearly, point-wise labels are too tedious to prepare. While some recent methods propose to train a 3D network with small percentages of point labels, we take the approach to an extreme and propose 'One Thing One Click,' meaning that the annotator only needs to label one point per object. To leverage these extremely sparse labels in network training, we design a novel self-training approach, in which we iteratively conduct the training and label propagation, facilitated by a graph propagation module. Also, we adopt a relation network to generate per-category prototypes and explicitly model the similarity among graph nodes to generate pseudo labels to guide the iterative training. Experimental results on both ScanNet-v2 and S3DIS show that our self-training approach, with extremely sparse annotations, outperforms all existing weakly supervised methods for 3D semantic segmentation by a large margin, and our results are also comparable to those of the fully supervised counterparts. |
Persistent Identifier | http://hdl.handle.net/10722/306758 |
ISSN | 1063-6919 |
2023 SCImago Journal Rankings | 10.331 |
ISI Accession Number ID | WOS:000739917301091 |
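The abstract describes an iterative loop: train on one labeled point per object, then propagate confident pseudo labels over a graph using per-category prototypes and node similarity. The following is a minimal, illustrative sketch of one such propagation round; the function name, cosine-similarity measure, and confidence threshold are assumptions for illustration, not taken from the paper's actual implementation.

```python
import numpy as np

def expand_pseudo_labels(features, labels, adjacency, threshold=0.9):
    """One illustrative round of label propagation over a point graph.

    features:  (N, D) per-node feature vectors
    labels:    (N,) int class labels, -1 for unlabeled nodes
    adjacency: (N, N) boolean adjacency matrix of the graph
    Returns a copy of `labels` with confident pseudo labels filled in.
    """
    classes = np.unique(labels[labels >= 0])
    # Per-category prototypes: mean feature of currently labeled nodes.
    prototypes = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # Cosine similarity between every node and every prototype.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = f @ p.T                      # (N, n_classes)
    best = sim.argmax(axis=1)
    conf = sim.max(axis=1)
    new_labels = labels.copy()
    for i in range(len(labels)):
        if labels[i] >= 0:
            continue
        # Only propagate to nodes adjacent to an already-labeled node,
        # and only when the prototype similarity is confident enough.
        has_labeled_neighbor = np.any(adjacency[i] & (labels >= 0))
        if has_labeled_neighbor and conf[i] >= threshold:
            new_labels[i] = classes[best[i]]
    return new_labels
```

In the paper's full pipeline, rounds like this alternate with retraining the segmentation network on the expanded label set, so each iteration both grows and refines the supervision.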
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Z | - |
dc.contributor.author | Qi, X | - |
dc.contributor.author | Fu, C | - |
dc.date.accessioned | 2021-10-22T07:39:11Z | - |
dc.date.available | 2021-10-22T07:39:11Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20-25 June 2021, p. 1726-1736 | - |
dc.identifier.issn | 1063-6919 | - |
dc.identifier.uri | http://hdl.handle.net/10722/306758 | - |
dc.description.abstract | Point cloud semantic segmentation often requires large-scale annotated training data, but clearly, point-wise labels are too tedious to prepare. While some recent methods propose to train a 3D network with small percentages of point labels, we take the approach to an extreme and propose 'One Thing One Click,' meaning that the annotator only needs to label one point per object. To leverage these extremely sparse labels in network training, we design a novel self-training approach, in which we iteratively conduct the training and label propagation, facilitated by a graph propagation module. Also, we adopt a relation network to generate per-category prototypes and explicitly model the similarity among graph nodes to generate pseudo labels to guide the iterative training. Experimental results on both ScanNet-v2 and S3DIS show that our self-training approach, with extremely sparse annotations, outperforms all existing weakly supervised methods for 3D semantic segmentation by a large margin, and our results are also comparable to those of the fully supervised counterparts. | - |
dc.language | eng | - |
dc.publisher | IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000147 | - |
dc.relation.ispartof | IEEE Conference on Computer Vision and Pattern Recognition. Proceedings | - |
dc.rights | IEEE Conference on Computer Vision and Pattern Recognition. Proceedings. Copyright © IEEE Computer Society. | - |
dc.rights | ©2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | - |
dc.title | One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Qi, X: xjqi@eee.hku.hk | - |
dc.identifier.authority | Qi, X=rp02666 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/CVPR46437.2021.00177 | - |
dc.identifier.scopus | eid_2-s2.0-85122390251 | - |
dc.identifier.hkuros | 328734 | - |
dc.identifier.spage | 1726 | - |
dc.identifier.epage | 1736 | - |
dc.identifier.isi | WOS:000739917301091 | - |
dc.publisher.place | United States | - |