Conference Paper: Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud
Field | Value |
---|---|
Title | Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud |
Authors | Xu, M; Zhang, J; Zhou, Z; Xu, M; Qi, X; Qiao, Y |
Issue Date | 2021 |
Citation | The Eleventh Symposium on Educational Advances in Artificial Intelligence (EAAI-21) in the 35th Association for the Advancement of Artificial Intelligence (AAAI) Conference on Artificial Intelligence, Virtual Conference, 4-7 February 2021 |
Abstract | In 2D image processing, some approaches decompose images into high- and low-frequency components to describe edge and smooth parts respectively. Similarly, the contour and flat areas of 3D objects, such as the boundary and seat area of a chair, describe different but complementary geometries. However, such an investigation is absent from previous deep networks, which understand point clouds by treating all points or local patches equally. To solve this problem, we propose the Geometry-Disentangled Attention Network (GDANet). GDANet introduces a Geometry-Disentangle Module to dynamically disentangle point clouds into the contour and flat parts of 3D objects, denoted by sharp and gentle variation components respectively. GDANet then exploits a Sharp-Gentle Complementary Attention Module that regards the features from the sharp and gentle variation components as two holistic representations, and pays different attention to them while fusing each with the original point cloud features. In this way, our method captures and refines holistic and complementary 3D geometric semantics from two distinct disentangled components to supplement the local information. Extensive experiments on 3D object classification and segmentation benchmarks demonstrate that GDANet achieves state-of-the-art results with fewer parameters. |
Description | AAAI Poster Session - Cluster: 3D Computer Vision - AC-D2-R2 - no. AAAI-2650 |
Persistent Identifier | http://hdl.handle.net/10722/306552 |
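The abstract describes a two-stage design: first disentangle a point cloud into sharp-variation (contour-like) and gentle-variation (flat-like) components, then fuse both with the original features via attention. The disentangling step can be illustrated with a minimal NumPy sketch; the local-deviation heuristic below is a simple stand-in chosen for illustration, and the function and parameter names are hypothetical, not the authors' implementation:

```python
import numpy as np

def disentangle(points, k=8, ratio=0.25):
    """Split a point cloud into sharp- and gentle-variation subsets.

    Toy proxy for GDANet's Geometry-Disentangle Module: score each
    point by how far it lies from the centroid of its k nearest
    neighbours (large deviation ~ contour, small deviation ~ flat).
    """
    n = points.shape[0]
    # pairwise squared distances between all points
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]   # k nearest neighbours (skip self)
    centroids = points[idx].mean(axis=1)       # centroid of each neighbourhood
    score = np.linalg.norm(points - centroids, axis=1)
    m = max(1, int(ratio * n))
    order = np.argsort(score)                  # ascending deviation
    sharp = points[order[-m:]]                 # largest deviation: contour-like
    gentle = points[order[:m]]                 # smallest deviation: flat-like
    return sharp, gentle
```

Running this on a planar grid with one spike, the spike point lands in the sharp subset while interior grid points land in the gentle subset, mirroring the contour/flat split the abstract motivates with the chair example.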
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Xu, M | - |
dc.contributor.author | Zhang, J | - |
dc.contributor.author | Zhou, Z | - |
dc.contributor.author | Xu, M | - |
dc.contributor.author | Qi, X | - |
dc.contributor.author | Qiao, Y | - |
dc.date.accessioned | 2021-10-22T07:36:16Z | - |
dc.date.available | 2021-10-22T07:36:16Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | The Eleventh Symposium on Educational Advances in Artificial Intelligence (EAAI-21) in the 35th Association for the Advancement of Artificial Intelligence (AAAI) Conference on Artificial Intelligence, Virtual Conference, 4-7 February 2021 | - |
dc.identifier.uri | http://hdl.handle.net/10722/306552 | - |
dc.description | AAAI Poster Session - Cluster: 3D Computer Vision - AC-D2-R2 - no. AAAI-2650 | - |
dc.description.abstract | In 2D image processing, some approaches decompose images into high- and low-frequency components to describe edge and smooth parts respectively. Similarly, the contour and flat areas of 3D objects, such as the boundary and seat area of a chair, describe different but complementary geometries. However, such an investigation is absent from previous deep networks, which understand point clouds by treating all points or local patches equally. To solve this problem, we propose the Geometry-Disentangled Attention Network (GDANet). GDANet introduces a Geometry-Disentangle Module to dynamically disentangle point clouds into the contour and flat parts of 3D objects, denoted by sharp and gentle variation components respectively. GDANet then exploits a Sharp-Gentle Complementary Attention Module that regards the features from the sharp and gentle variation components as two holistic representations, and pays different attention to them while fusing each with the original point cloud features. In this way, our method captures and refines holistic and complementary 3D geometric semantics from two distinct disentangled components to supplement the local information. Extensive experiments on 3D object classification and segmentation benchmarks demonstrate that GDANet achieves state-of-the-art results with fewer parameters. | - |
dc.language | eng | - |
dc.relation.ispartof | AAAI Conference on Artificial Intelligence (AAAI-21) - The Eleventh Symposium on Educational Advances in Artificial Intelligence (EAAI-21) | - |
dc.title | Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Qi, X: xjqi@eee.hku.hk | - |
dc.identifier.authority | Qi, X=rp02666 | - |
dc.identifier.hkuros | 328769 | - |