
Article: A Convolutional Neural Network for Point Cloud Instance Segmentation in Cluttered Scene Trained by Synthetic Data without Color

Title: A Convolutional Neural Network for Point Cloud Instance Segmentation in Cluttered Scene Trained by Synthetic Data without Color
Authors: Xu, Yajun; Arai, Shogo; Tokuda, Fuyuki; Kosuge, Kazuhiro
Keywords: Point cloud; Instance segmentation; Deep learning
Issue Date: 2020
Citation: IEEE Access, 2020, v. 8, p. 70262-70269
Abstract: 3D instance segmentation is a fundamental task in computer vision. Effective segmentation plays an important role in robotic tasks, augmented reality, autonomous driving, and other applications. With the success of convolutional neural networks in 2D image processing, the use of deep learning methods to segment 3D point clouds has received much attention. However, good convergence of the training loss often requires a large amount of human-annotated data, and building such a 3D dataset is time-consuming. This paper proposes a method for training convolutional neural networks to predict instance segmentation results using synthetic data. The proposed method is based on the SGPN framework. We replaced the original feature extractor with dynamic graph convolutional neural networks, which learn to extract local geometric features, and proposed a simple and effective loss function that makes the network focus on hard examples. We experimentally show that the proposed method significantly outperforms the state-of-the-art method on both the Stanford 3D Indoor Semantics Dataset and our own datasets.
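The record does not reproduce the paper's loss function. As an illustration only, the standard way to make a network "focus on hard examples" is a focal-loss-style modulating factor on cross-entropy; the function name and the gamma value below are hypothetical, not taken from the paper:

```python
import numpy as np

def hard_example_bce(p, y, gamma=2.0, eps=1e-7):
    """Binary cross-entropy modulated by a focal-style factor (1 - p_t)**gamma.

    p_t is the probability the model assigns to the true class, so confident
    (easy) examples are down-weighted and hard examples dominate the loss.
    gamma controls how strongly easy examples are suppressed.
    """
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)  # probability assigned to the true class
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

# A confident correct prediction contributes far less than a hard example:
easy = hard_example_bce(np.array([0.95]), np.array([1]))
hard = hard_example_bce(np.array([0.30]), np.array([1]))
```

Because the modulating factor is at most 1, this loss never exceeds plain binary cross-entropy on the same predictions; it only redistributes emphasis toward hard examples.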
Persistent Identifier: http://hdl.handle.net/10722/303021
ISI Accession Number ID: WOS:000549829900015


DC Field | Value | Language
dc.contributor.author | Xu, Yajun | -
dc.contributor.author | Arai, Shogo | -
dc.contributor.author | Tokuda, Fuyuki | -
dc.contributor.author | Kosuge, Kazuhiro | -
dc.date.accessioned | 2021-09-07T08:43:02Z | -
dc.date.available | 2021-09-07T08:43:02Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | IEEE Access, 2020, v. 8, p. 70262-70269 | -
dc.identifier.uri | http://hdl.handle.net/10722/303021 | -
dc.description.abstract | 3D instance segmentation is a fundamental task in computer vision. Effective segmentation plays an important role in robotic tasks, augmented reality, autonomous driving, and other applications. With the success of convolutional neural networks in 2D image processing, the use of deep learning methods to segment 3D point clouds has received much attention. However, good convergence of the training loss often requires a large amount of human-annotated data, and building such a 3D dataset is time-consuming. This paper proposes a method for training convolutional neural networks to predict instance segmentation results using synthetic data. The proposed method is based on the SGPN framework. We replaced the original feature extractor with dynamic graph convolutional neural networks, which learn to extract local geometric features, and proposed a simple and effective loss function that makes the network focus on hard examples. We experimentally show that the proposed method significantly outperforms the state-of-the-art method on both the Stanford 3D Indoor Semantics Dataset and our own datasets. | -
dc.language | eng | -
dc.relation.ispartof | IEEE Access | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject | Point cloud | -
dc.subject | Instance segmentation | -
dc.subject | Deep learning | -
dc.title | A Convolutional Neural Network for Point Cloud Instance Segmentation in Cluttered Scene Trained by Synthetic Data without Color | -
dc.type | Article | -
dc.description.nature | published_or_final_version | -
dc.identifier.doi | 10.1109/ACCESS.2020.2978506 | -
dc.identifier.scopus | eid_2-s2.0-85083899191 | -
dc.identifier.volume | 8 | -
dc.identifier.spage | 70262 | -
dc.identifier.epage | 70269 | -
dc.identifier.eissn | 2169-3536 | -
dc.identifier.isi | WOS:000549829900015 | -
