Article: VPU: A Video-Based Point Cloud Upsampling Framework

Title: VPU: A Video-Based Point Cloud Upsampling Framework
Authors: Wang, Kaisiyuan; Sheng, Lu; Gu, Shuhang; Xu, Dong
Keywords: Point cloud sequence; point cloud upsampling; spatial-temporal aggregation
Issue Date: 2022
Citation: IEEE Transactions on Image Processing, 2022, v. 31, p. 4062-4075
Abstract: In this work, we propose a new patch-based framework called VPU for the video-based point cloud upsampling task by effectively exploiting temporal dependency among multiple consecutive point cloud frames, in which each frame consists of a set of unordered, sparse and irregular 3D points. Rather than adopting the sophisticated motion estimation strategies used in video analysis, we propose a new spatio-temporal aggregation (STA) module to effectively extract, align and aggregate rich local geometric clues from consecutive frames at the feature level. By more reliably summarizing spatio-temporally consistent and complementary knowledge from multiple frames in the resultant local structural features, our method better infers the local geometry distributions at the current frame. In addition, our STA module can be readily incorporated into various existing single frame-based point upsampling methods (e.g., PU-Net, MPU, PU-GAN and PU-GCN). Comprehensive experiments on multiple point cloud sequence datasets demonstrate that our video-based point cloud upsampling framework achieves substantial performance improvement over its single frame-based counterparts.
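The abstract describes aggregating local geometric features from several consecutive point cloud frames rather than estimating explicit motion. This is not the authors' STA implementation (the record links only to subscribed full text); it is a minimal NumPy sketch of the general idea, under the assumption that cross-frame alignment is approximated by k-nearest-neighbour grouping around each current-frame point and that fusion is done by max-pooling over neighbours and averaging over frames. All function names and parameters here are hypothetical.

```python
import numpy as np

def knn_indices(query, points, k):
    """Brute-force k-nearest-neighbour indices of each query point in `points`."""
    # Pairwise squared distances, shape (num_query, num_points).
    d2 = ((query[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, :k]

def aggregate_spatio_temporal(frames, feats, cur, k=8):
    """Fuse per-point features across frames (illustrative sketch, not VPU itself).

    frames: list of (N_t, 3) point arrays, one per frame
    feats:  list of (N_t, C) per-point feature arrays, one per frame
    cur:    index of the current frame to upsample
    Returns an (N_cur, C) fused descriptor for each current-frame point.
    """
    query = frames[cur]                        # (N_cur, 3) current-frame points
    per_frame = []
    for pts, f in zip(frames, feats):
        idx = knn_indices(query, pts, k)       # (N_cur, k) neighbour indices
        per_frame.append(f[idx].max(axis=1))   # (N_cur, C) max-pooled local feature
    return np.mean(per_frame, axis=0)          # average the per-frame summaries
```

A real system would learn the grouping and fusion weights end-to-end; the fixed max/mean pooling here only illustrates how neighbouring frames can contribute complementary local evidence without explicit motion estimation.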
Persistent Identifier: http://hdl.handle.net/10722/321986
ISSN: 1057-7149
2021 Impact Factor: 11.041
2020 SCImago Journal Rankings: 1.778
ISI Accession Number: WOS:000812528700001


Dublin Core Record (DC Field: Value)
dc.contributor.author: Wang, Kaisiyuan
dc.contributor.author: Sheng, Lu
dc.contributor.author: Gu, Shuhang
dc.contributor.author: Xu, Dong
dc.date.accessioned: 2022-11-03T02:22:49Z
dc.date.available: 2022-11-03T02:22:49Z
dc.date.issued: 2022
dc.identifier.citation: IEEE Transactions on Image Processing, 2022, v. 31, p. 4062-4075
dc.identifier.issn: 1057-7149
dc.identifier.uri: http://hdl.handle.net/10722/321986
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Image Processing
dc.subject: Point cloud sequence
dc.subject: point cloud upsampling
dc.subject: spatial-temporal aggregation
dc.title: VPU: A Video-Based Point Cloud Upsampling Framework
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TIP.2022.3166627
dc.identifier.pmid: 35436193
dc.identifier.scopus: eid_2-s2.0-85128657816
dc.identifier.volume: 31
dc.identifier.spage: 4062
dc.identifier.epage: 4075
dc.identifier.eissn: 1941-0042
dc.identifier.isi: WOS:000812528700001
