Conference Paper: PU-Net: Point Cloud Upsampling Network
Title | PU-Net: Point Cloud Upsampling Network |
---|---|
Authors | Yu, Lequan; Li, Xianzhi; Fu, Chi Wing; Cohen-Or, Daniel; Heng, Pheng Ann |
Issue Date | 2018 |
Citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2018, p. 2790-2799 |
Abstract | Learning and analyzing 3D point clouds with deep networks is challenging due to the sparseness and irregularity of the data. In this paper, we present a data-driven point cloud upsampling technique. The key idea is to learn multi-level features per point and expand the point set via a multi-branch convolution unit implicitly in feature space. The expanded feature is then split to a multitude of features, which are then reconstructed to an upsampled point set. Our network is applied at a patch-level, with a joint loss function that encourages the upsampled points to remain on the underlying surface with a uniform distribution. We conduct various experiments using synthesis and scan data to evaluate our method and demonstrate its superiority over some baseline methods and an optimization-based method. Results show that our upsampled points have better uniformity and are located closer to the underlying surfaces. |
Persistent Identifier | http://hdl.handle.net/10722/299580 |
ISSN | 1063-6919 |
SCImago Journal Rankings | 10.331 (2023) |
ISI Accession Number | WOS:000457843602095 |
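The abstract describes expanding a point set implicitly in feature space via a multi-branch convolution unit: each branch produces its own per-point feature copy, and the branch outputs are interleaved so N points become rN points before reconstruction. The following is a minimal NumPy sketch of that expansion step only, under stated assumptions: random weights stand in for the learned per-branch 1×1 convolutions, and the function name `expand_features` is illustrative, not from the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def expand_features(feats, r, out_dim, rng):
    """Sketch of multi-branch feature expansion: each of the r branches
    applies its own per-point linear transform (standing in for a learned
    1x1 convolution), and the branch outputs are interleaved so that
    N points with C features become r*N points with out_dim features."""
    n, c = feats.shape
    branches = []
    for _ in range(r):
        w = rng.standard_normal((c, out_dim)) * 0.1  # per-branch weights
        branches.append(np.maximum(feats @ w, 0.0))  # ReLU activation
    # (N, r, out_dim) -> (r*N, out_dim): r upsampled feature copies per point
    return np.stack(branches, axis=1).reshape(n * r, out_dim)

feats = rng.standard_normal((128, 64))  # 128 points, 64-dim features
up = expand_features(feats, r=4, out_dim=32, rng=rng)
print(up.shape)  # (512, 32): 4x upsampled in feature space
```

In the paper's pipeline, such expanded features would then be regressed back to 3D coordinates; this sketch stops at the expansion itself.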
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yu, Lequan | - |
dc.contributor.author | Li, Xianzhi | - |
dc.contributor.author | Fu, Chi Wing | - |
dc.contributor.author | Cohen-Or, Daniel | - |
dc.contributor.author | Heng, Pheng Ann | - |
dc.date.accessioned | 2021-05-21T03:34:43Z | - |
dc.date.available | 2021-05-21T03:34:43Z | - |
dc.date.issued | 2018 | - |
dc.identifier.citation | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2018, p. 2790-2799 | - |
dc.identifier.issn | 1063-6919 | - |
dc.identifier.uri | http://hdl.handle.net/10722/299580 | - |
dc.description.abstract | Learning and analyzing 3D point clouds with deep networks is challenging due to the sparseness and irregularity of the data. In this paper, we present a data-driven point cloud upsampling technique. The key idea is to learn multi-level features per point and expand the point set via a multi-branch convolution unit implicitly in feature space. The expanded feature is then split to a multitude of features, which are then reconstructed to an upsampled point set. Our network is applied at a patch-level, with a joint loss function that encourages the upsampled points to remain on the underlying surface with a uniform distribution. We conduct various experiments using synthesis and scan data to evaluate our method and demonstrate its superiority over some baseline methods and an optimization-based method. Results show that our upsampled points have better uniformity and are located closer to the underlying surfaces. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | - |
dc.title | PU-Net: Point Cloud Upsampling Network | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/CVPR.2018.00295 | - |
dc.identifier.scopus | eid_2-s2.0-85055083474 | - |
dc.identifier.spage | 2790 | - |
dc.identifier.epage | 2799 | - |
dc.identifier.isi | WOS:000457843602095 | - |