Conference Paper: Point Cloud Compression with Implicit Neural Representations: A Unified Framework
| Title | Point Cloud Compression with Implicit Neural Representations: A Unified Framework |
|---|---|
| Authors | Ruan, Hongning; Shao, Yulin; Yang, Qianqian; Zhao, Liang; Niyato, Dusit |
| Keywords | implicit neural representation; neural network compression; point cloud compression |
| Issue Date | 2024 |
| Citation | 2024 IEEE/CIC International Conference on Communications in China (ICCC 2024), 2024, p. 1709-1714 |
| Abstract | Point clouds have become increasingly vital across various applications thanks to their ability to realistically depict 3D objects and scenes. Nevertheless, effectively compressing unstructured, high-precision point cloud data remains a significant challenge. In this paper, we present a pioneering point cloud compression framework capable of handling both geometry and attribute components. Unlike traditional approaches and existing learning-based methods, our framework utilizes two coordinate-based neural networks to implicitly represent a voxelized point cloud. The first network generates the occupancy status of a voxel, while the second network determines the attributes of an occupied voxel. To tackle an immense number of voxels within the volumetric space, we partition the space into smaller cubes and focus solely on voxels within non-empty cubes. By feeding the coordinates of these voxels into the respective networks, we reconstruct the geometry and attribute components of the original point cloud. The neural network parameters are further quantized and compressed. Experimental results underscore the superior performance of our proposed method compared to the octree-based approach employed in the latest G-PCC standards. Moreover, our method exhibits high universality when contrasted with existing learning-based techniques. |
| Persistent Identifier | http://hdl.handle.net/10722/363769 |
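The abstract above describes two coordinate-based networks: one predicting whether a voxel is occupied, the other predicting the attributes of occupied voxels. The following is a minimal sketch of that idea in PyTorch, assuming a plain MLP; the layer widths, depths, activations, and the names `CoordinateMLP`, `occupancy_net`, and `attribute_net` are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch (assumed architecture, not the paper's exact networks): two
# coordinate-based MLPs, one predicting voxel occupancy and one predicting
# attributes (e.g. RGB) for occupied voxels.
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Small fully connected network mapping a 3D voxel coordinate to an output vector."""
    def __init__(self, out_dim: int, hidden: int = 128, depth: int = 4):
        super().__init__()
        layers, in_dim = [], 3  # input: normalized (x, y, z) voxel coordinate
        for _ in range(depth):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(hidden, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(coords)

# Geometry network: one occupancy logit per queried voxel coordinate.
occupancy_net = CoordinateMLP(out_dim=1)
# Attribute network: e.g. an RGB colour per occupied voxel.
attribute_net = CoordinateMLP(out_dim=3)

coords = torch.rand(1024, 3)                      # example query coordinates in [0, 1]^3
occ_prob = torch.sigmoid(occupancy_net(coords))   # occupancy probability per voxel
occupied = coords[occ_prob.squeeze(-1) > 0.5]     # keep voxels predicted to be occupied
colours = torch.sigmoid(attribute_net(occupied))  # attributes only for occupied voxels
```

In the framework described by the abstract, it is the parameters of such networks (rather than the voxels themselves) that are quantized and entropy-coded to form the compressed bitstream.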
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Ruan, Hongning | - |
| dc.contributor.author | Shao, Yulin | - |
| dc.contributor.author | Yang, Qianqian | - |
| dc.contributor.author | Zhao, Liang | - |
| dc.contributor.author | Niyato, Dusit | - |
| dc.date.accessioned | 2025-10-10T07:49:13Z | - |
| dc.date.available | 2025-10-10T07:49:13Z | - |
| dc.date.issued | 2024 | - |
| dc.identifier.citation | 2024 IEEE/CIC International Conference on Communications in China (ICCC 2024), 2024, p. 1709-1714 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/363769 | - |
| dc.description.abstract | Point clouds have become increasingly vital across various applications thanks to their ability to realistically depict 3D objects and scenes. Nevertheless, effectively compressing unstructured, high-precision point cloud data remains a significant challenge. In this paper, we present a pioneering point cloud compression framework capable of handling both geometry and attribute components. Unlike traditional approaches and existing learning-based methods, our framework utilizes two coordinate-based neural networks to implicitly represent a voxelized point cloud. The first network generates the occupancy status of a voxel, while the second network determines the attributes of an occupied voxel. To tackle an immense number of voxels within the volumetric space, we partition the space into smaller cubes and focus solely on voxels within non-empty cubes. By feeding the coordinates of these voxels into the respective networks, we reconstruct the geometry and attribute components of the original point cloud. The neural network parameters are further quantized and compressed. Experimental results underscore the superior performance of our proposed method compared to the octree-based approach employed in the latest G-PCC standards. Moreover, our method exhibits high universality when contrasted with existing learning-based techniques. | - |
| dc.language | eng | - |
| dc.relation.ispartof | 2024 IEEE/CIC International Conference on Communications in China (ICCC 2024) | - |
| dc.subject | implicit neural representation | - |
| dc.subject | neural network compression | - |
| dc.subject | Point cloud compression | - |
| dc.title | Point Cloud Compression with Implicit Neural Representations: A Unified Framework | - |
| dc.type | Conference_Paper | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.doi | 10.1109/ICCC62479.2024.10681880 | - |
| dc.identifier.scopus | eid_2-s2.0-85205332605 | - |
| dc.identifier.spage | 1709 | - |
| dc.identifier.epage | 1714 | - |
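The abstract also mentions partitioning the volumetric space into smaller cubes and querying only voxels inside non-empty cubes. The sketch below illustrates that selection step under assumed parameters (for example a cube size of 16 voxels); the helper `nonempty_cube_voxels` is hypothetical and not taken from the paper.

```python
# Illustrative sketch (an assumption, not the paper's code): partition a voxel
# grid into fixed-size cubes and enumerate only the voxels that fall inside
# cubes containing at least one occupied voxel.
import numpy as np

def nonempty_cube_voxels(points: np.ndarray, cube_size: int = 16) -> np.ndarray:
    """points: (N, 3) integer voxel coordinates of the occupied voxels.
    Returns the coordinates of every voxel lying in a non-empty cube."""
    cube_ids = np.unique(points // cube_size, axis=0)  # cubes holding >= 1 occupied voxel
    offsets = np.stack(np.meshgrid(*[np.arange(cube_size)] * 3,
                                   indexing="ij"), axis=-1).reshape(-1, 3)
    # All voxel coordinates inside each non-empty cube; these are the candidate
    # coordinates that would be fed to the occupancy and attribute networks.
    return (cube_ids[:, None, :] * cube_size + offsets[None, :, :]).reshape(-1, 3)

occupied = np.random.randint(0, 256, size=(5000, 3))   # toy voxelized point cloud
candidates = nonempty_cube_voxels(occupied, cube_size=16)
print(candidates.shape)                                 # (num_nonempty_cubes * 16**3, 3)
```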
