Conference Paper: Prototype-Voxel Contrastive Learning for LiDAR Point Cloud Panoptic Segmentation

Title: Prototype-Voxel Contrastive Learning for LiDAR Point Cloud Panoptic Segmentation
Authors: Liu, Minzhe; Zhou, Qiang; Zhao, Hengshuang; Li, Jianing; Du, Yuan; Keutzer, Kurt; Du, Li; Zhang, Shanghang
Issue Date: 2022
Citation: Proceedings - IEEE International Conference on Robotics and Automation, 2022, p. 9243-9250
Abstract: LiDAR point cloud panoptic segmentation, which comprises both semantic and instance segmentation, plays a critical role in fine-grained scene understanding for autonomous driving. Existing 3D voxelized approaches either rely on 3D sparse convolution, which captures only local scene context, or add an extra, time-consuming PointNet branch to capture global feature structure. To address these limitations, we propose an end-to-end Prototype-Voxel Contrastive Learning (PVCL) framework for learning stable and discriminative semantic representations, which includes voxel-level and prototype-level contrastive learning (CL). The voxel-level CL decreases intra-class distance and increases inter-class distance among sample representations, while the prototype-level CL further reduces the dependence of CL on negative sampling and avoids the influence of outliers from the same class, making PVCL more effective for outdoor point cloud panoptic segmentation. Extensive experiments on the public point cloud panoptic segmentation datasets Semantic-KITTI and nuScenes, including evaluations and ablation studies, demonstrate that PVCL achieves superior performance compared with the state of the art. Our approach ranked first on the public Semantic-KITTI leaderboard at the time of submission, surpassing the published second-place method, EfficientLPS, by 1.7% in PQ.
Persistent Identifier: http://hdl.handle.net/10722/333550
ISSN: 1050-4729
2023 SCImago Journal Rankings: 1.620
ISI Accession Number ID: WOS:000941277601135
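
The abstract above describes two contrastive terms: a voxel-level loss that pulls same-class voxel features together and pushes different classes apart, and a prototype-level loss that contrasts each voxel against per-class mean prototypes so that explicit negative sampling and same-class outliers matter less. Below is a minimal, self-contained sketch in PyTorch of what such InfoNCE-style terms could look like. The function names, the exact formulation, and the temperature value are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of voxel- and prototype-level contrastive losses,
# loosely following the description in the PVCL abstract. Not the paper's code.
import torch
import torch.nn.functional as F

def voxel_cl_loss(feats, labels, temperature=0.1):
    """Supervised contrastive loss over voxel embeddings.

    feats:  (N, D) voxel features
    labels: (N,)   semantic class index per voxel
    Pulls same-class voxels together and pushes different classes apart.
    """
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / temperature                  # (N, N) similarities
    n = feats.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=feats.device)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))        # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_count
    return loss[pos_mask.any(dim=1)].mean()               # rows with >=1 positive

def prototype_cl_loss(feats, labels, num_classes, temperature=0.1):
    """Contrast each voxel against per-class mean prototypes instead of
    individual samples, which removes explicit negative sampling and
    damps the influence of same-class outliers."""
    feats = F.normalize(feats, dim=1)
    protos = torch.zeros(num_classes, feats.size(1), device=feats.device)
    protos.index_add_(0, labels, feats)                    # sum features per class
    counts = torch.bincount(labels, minlength=num_classes).clamp(min=1)
    protos = F.normalize(protos / counts[:, None], dim=1)  # class mean prototypes
    logits = feats @ protos.t() / temperature              # (N, C)
    return F.cross_entropy(logits, labels)

# Usage on random data (128 voxel embeddings, 10 semantic classes):
feats = torch.randn(128, 32)
labels = torch.randint(0, 10, (128,))
total = voxel_cl_loss(feats, labels) + prototype_cl_loss(feats, labels, 10)
```

In a full pipeline these terms would be added to the usual semantic and instance segmentation losses; how they are weighted and scheduled is specified in the paper itself.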

 

DC Field / Value
dc.contributor.author: Liu, Minzhe
dc.contributor.author: Zhou, Qiang
dc.contributor.author: Zhao, Hengshuang
dc.contributor.author: Li, Jianing
dc.contributor.author: Du, Yuan
dc.contributor.author: Keutzer, Kurt
dc.contributor.author: Du, Li
dc.contributor.author: Zhang, Shanghang
dc.date.accessioned: 2023-10-06T05:20:24Z
dc.date.available: 2023-10-06T05:20:24Z
dc.date.issued: 2022
dc.identifier.citation: Proceedings - IEEE International Conference on Robotics and Automation, 2022, p. 9243-9250
dc.identifier.issn: 1050-4729
dc.identifier.uri: http://hdl.handle.net/10722/333550
dc.description.abstract: LiDAR point cloud panoptic segmentation, which comprises both semantic and instance segmentation, plays a critical role in fine-grained scene understanding for autonomous driving. Existing 3D voxelized approaches either rely on 3D sparse convolution, which captures only local scene context, or add an extra, time-consuming PointNet branch to capture global feature structure. To address these limitations, we propose an end-to-end Prototype-Voxel Contrastive Learning (PVCL) framework for learning stable and discriminative semantic representations, which includes voxel-level and prototype-level contrastive learning (CL). The voxel-level CL decreases intra-class distance and increases inter-class distance among sample representations, while the prototype-level CL further reduces the dependence of CL on negative sampling and avoids the influence of outliers from the same class, making PVCL more effective for outdoor point cloud panoptic segmentation. Extensive experiments on the public point cloud panoptic segmentation datasets Semantic-KITTI and nuScenes, including evaluations and ablation studies, demonstrate that PVCL achieves superior performance compared with the state of the art. Our approach ranked first on the public Semantic-KITTI leaderboard at the time of submission, surpassing the published second-place method, EfficientLPS, by 1.7% in PQ.
dc.language: eng
dc.relation.ispartof: Proceedings - IEEE International Conference on Robotics and Automation
dc.title: Prototype-Voxel Contrastive Learning for LiDAR Point Cloud Panoptic Segmentation
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICRA46639.2022.9811638
dc.identifier.scopus: eid_2-s2.0-85136335837
dc.identifier.spage: 9243
dc.identifier.epage: 9250
dc.identifier.isi: WOS:000941277601135
