Article: Cross-Dataset Point Cloud Recognition Using Deep-Shallow Domain Adaptation Network

Title: Cross-Dataset Point Cloud Recognition Using Deep-Shallow Domain Adaptation Network
Authors: Wang, Feiyu; Li, Wen; Xu, Dong
Keywords: Adaptation models; Co-training; Domain Adaptation; Feature extraction; Image recognition; Point Cloud; Target recognition; Task analysis; Three-dimensional displays; Training
Issue Date: 2021
Citation: IEEE Transactions on Image Processing, 2021, v. 30, p. 7364-7377
Abstract: In this work, we propose a novel two-view domain adaptation network named Deep-Shallow Domain Adaptation Network (DSDAN) for 3D point cloud recognition. Unlike in the traditional 2D image recognition task, valuable texture information is often absent in point cloud data, making point cloud recognition a challenging task, especially in the cross-dataset scenario where the training and test data exhibit a considerable distribution mismatch. In our DSDAN method, we tackle the challenging cross-dataset 3D point cloud recognition task from two aspects. On one hand, we propose a two-view learning framework, such that we can effectively leverage multiple feature representations to improve the recognition performance. To this end, we propose a simple and efficient Bag-of-Points feature method as a complementary view to the deep representation. Moreover, we also propose a cross-view consistency loss to boost the two-view learning framework. On the other hand, we further propose a two-level adaptation strategy to effectively address the domain distribution mismatch issue. Specifically, we apply a feature-level distribution alignment module for each view, and also propose an instance-level adaptation approach to select highly confident pseudo-labeled target samples for adapting the model to the target domain, based on which a co-training scheme is used to integrate the learning and adaptation process on the two views. Extensive experiments on the benchmark dataset show that our newly proposed DSDAN method outperforms the existing state-of-the-art methods for the cross-dataset point cloud recognition task.
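Two ingredients from the abstract can be sketched concretely: the instance-level selection of highly confident pseudo-labeled target samples, and a cross-view consistency term between the two views' predictions. The following is a minimal illustrative sketch, not the paper's actual implementation; the function names, the mean-squared-difference form of the consistency term, and the 0.9 confidence threshold are all assumptions made here for clarity.

```python
# Hedged sketch of two DSDAN ideas described in the abstract.
# The 0.9 threshold and the squared-difference consistency term are
# illustrative choices, not taken from the paper.

def select_confident_pseudo_labels(probs, threshold=0.9):
    """Keep target samples whose top class probability exceeds `threshold`.

    probs: list of per-sample class-probability lists.
    Returns (sample_index, pseudo_label) pairs for the confident samples,
    which a co-training scheme could then feed back into training.
    """
    selected = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            selected.append((i, p.index(conf)))
    return selected


def cross_view_consistency(p_deep, p_shallow):
    """Mean squared difference between the deep view's and the shallow
    (Bag-of-Points) view's predicted class distributions -- one simple
    way to encourage the two views to agree."""
    total = 0.0
    for pd, ps in zip(p_deep, p_shallow):
        total += sum((a - b) ** 2 for a, b in zip(pd, ps))
    return total / len(p_deep)


# Example: two unlabeled target samples, 3 classes.
probs = [[0.95, 0.03, 0.02], [0.40, 0.35, 0.25]]
print(select_confident_pseudo_labels(probs))  # only sample 0 passes the threshold
```

Only the first sample clears the 0.9 threshold, so only it would receive a pseudo-label; identical predictions from the two views drive the consistency term to zero.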
Persistent Identifier: http://hdl.handle.net/10722/322058
ISSN: 1057-7149
2023 Impact Factor: 10.8
2023 SCImago Journal Rankings: 3.556
ISI Accession Number ID: WOS:000690439600001

 

DC Field: Value
dc.contributor.author: Wang, Feiyu
dc.contributor.author: Li, Wen
dc.contributor.author: Xu, Dong
dc.date.accessioned: 2022-11-03T02:23:19Z
dc.date.available: 2022-11-03T02:23:19Z
dc.date.issued: 2021
dc.identifier.citation: IEEE Transactions on Image Processing, 2021, v. 30, p. 7364-7377
dc.identifier.issn: 1057-7149
dc.identifier.uri: http://hdl.handle.net/10722/322058
dc.description.abstract: (abstract as given above)
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Image Processing
dc.subject: Adaptation models
dc.subject: Co-training
dc.subject: Domain Adaptation
dc.subject: Feature extraction
dc.subject: Image recognition
dc.subject: Point Cloud
dc.subject: Target recognition
dc.subject: Task analysis
dc.subject: Three-dimensional displays
dc.subject: Training
dc.title: Cross-Dataset Point Cloud Recognition Using Deep-Shallow Domain Adaptation Network
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TIP.2021.3092818
dc.identifier.pmid: 34255628
dc.identifier.scopus: eid_2-s2.0-85110893524
dc.identifier.volume: 30
dc.identifier.spage: 7364
dc.identifier.epage: 7377
dc.identifier.eissn: 1941-0042
dc.identifier.isi: WOS:000690439600001
