Links for fulltext (may require subscription):
- Publisher Website (DOI): 10.1109/LRA.2020.3005121
- Scopus: eid_2-s2.0-85088422946
- WOS: WOS:000546883300005
Article: Edge Enhanced Implicit Orientation Learning With Geometric Prior for 6D Pose Estimation
Title | Edge Enhanced Implicit Orientation Learning With Geometric Prior for 6D Pose Estimation |
---|---|
Authors | WEN, Y; Pan, H; Yang, L; Wang, W |
Keywords | Deep learning for visual perception; representation learning |
Issue Date | 2020 |
Publisher | Institute of Electrical and Electronics Engineers. The Journal's web site is located at https://www.ieee.org/membership-catalog/productdetail/showProductDetailPage.html?product=PER481-ELE |
Citation | IEEE Robotics and Automation Letters, 2020, v. 5 n. 3, p. 4931-4938 How to Cite? |
Abstract | Estimating 6D poses of rigid objects from RGB images is an important but challenging task. This is especially true for textureless objects with strong symmetry, since they have only sparse visual features to be leveraged for the task and their symmetry leads to pose ambiguity. The implicit encoding of orientations learned by autoencoders [31], [32] has demonstrated its effectiveness in handling such objects without requiring explicit pose labeling. In this letter, we further improve this methodology with two key technical contributions. First, we use edge cues to complement the color images with more discriminative features and reduce the domain gap between the real images for testing and the synthetic ones for training. Second, we enhance the regularity of the implicitly learned pose representations by a self-supervision scheme to enforce the geometric prior that the latent representations of two images presenting nearby rotations should be close too. Our approach achieves state-of-the-art performance on the T-LESS benchmark in the RGB domain; its evaluation on the LINEMOD dataset also outperforms other synthetically trained approaches. Extensive ablation tests demonstrate the improvements enabled by our technical designs. Our code is publicly available for research use at https://github.com/fylwen/EEGP-AAE. |
Persistent Identifier | http://hdl.handle.net/10722/294266 |
ISSN | 2377-3766 (2023 Impact Factor: 4.6; 2023 SCImago Journal Rankings: 2.119) |
ISI Accession Number ID | WOS:000546883300005 |
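The geometric prior described in the abstract — that latent codes of two images with nearby rotations should also be close — can be illustrated with a minimal sketch. The function names, the Gaussian similarity target, and the cosine-similarity form below are illustrative assumptions, not the paper's actual loss; they only show how a rotation-proximity target can supervise latent distances.

```python
import numpy as np

def geodesic_distance(R1, R2):
    """Angle (radians) of the relative rotation R1^T R2 on SO(3)."""
    cos_angle = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

def geometric_prior_loss(z1, z2, R1, R2, sigma=0.2):
    """Hypothetical self-supervision term: push the cosine similarity of
    latent codes z1, z2 toward a Gaussian function of the rotation gap,
    so nearby rotations yield nearby latent representations."""
    target = np.exp(-geodesic_distance(R1, R2) ** 2 / (2.0 * sigma ** 2))
    cos_sim = np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))
    return (cos_sim - target) ** 2
```

For identical rotations the target similarity is 1, so identical latent codes incur zero loss; as the rotation gap grows, the loss instead rewards dissimilar codes.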
DC Field | Value | Language |
---|---|---|
dc.contributor.author | WEN, Y | - |
dc.contributor.author | Pan, H | - |
dc.contributor.author | Yang, L | - |
dc.contributor.author | Wang, W | - |
dc.date.accessioned | 2020-11-23T08:28:54Z | - |
dc.date.available | 2020-11-23T08:28:54Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | IEEE Robotics and Automation Letters, 2020, v. 5 n. 3, p. 4931-4938 | - |
dc.identifier.issn | 2377-3766 | - |
dc.identifier.uri | http://hdl.handle.net/10722/294266 | - |
dc.description.abstract | Estimating 6D poses of rigid objects from RGB images is an important but challenging task. This is especially true for textureless objects with strong symmetry, since they have only sparse visual features to be leveraged for the task and their symmetry leads to pose ambiguity. The implicit encoding of orientations learned by autoencoders [31], [32] has demonstrated its effectiveness in handling such objects without requiring explicit pose labeling. In this letter, we further improve this methodology with two key technical contributions. First, we use edge cues to complement the color images with more discriminative features and reduce the domain gap between the real images for testing and the synthetic ones for training. Second, we enhance the regularity of the implicitly learned pose representations by a self-supervision scheme to enforce the geometric prior that the latent representations of two images presenting nearby rotations should be close too. Our approach achieves state-of-the-art performance on the T-LESS benchmark in the RGB domain; its evaluation on the LINEMOD dataset also outperforms other synthetically trained approaches. Extensive ablation tests demonstrate the improvements enabled by our technical designs. Our code is publicly available for research use at https://github.com/fylwen/EEGP-AAE. | - |
dc.language | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers. The Journal's web site is located at https://www.ieee.org/membership-catalog/productdetail/showProductDetailPage.html?product=PER481-ELE | - |
dc.relation.ispartof | IEEE Robotics and Automation Letters | - |
dc.rights | IEEE Robotics and Automation Letters. Copyright © Institute of Electrical and Electronics Engineers. | - |
dc.rights | ©2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | - |
dc.subject | Deep learning for visual perception | - |
dc.subject | representation learning | - |
dc.title | Edge Enhanced Implicit Orientation Learning With Geometric Prior for 6D Pose Estimation | - |
dc.type | Article | - |
dc.identifier.email | Yang, L: lyang125@hku.hk | - |
dc.identifier.email | Wang, W: wenping@cs.hku.hk | - |
dc.identifier.authority | Wang, W=rp00186 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/LRA.2020.3005121 | - |
dc.identifier.scopus | eid_2-s2.0-85088422946 | - |
dc.identifier.hkuros | 318918 | - |
dc.identifier.volume | 5 | - |
dc.identifier.issue | 3 | - |
dc.identifier.spage | 4931 | - |
dc.identifier.epage | 4938 | - |
dc.identifier.isi | WOS:000546883300005 | - |
dc.publisher.place | United States | - |
dc.identifier.issnl | 2377-3766 | - |