Article: Localization and Completion for 3D Object Interactions

Title: Localization and Completion for 3D Object Interactions
Authors: Zhao, Xi; Hu, Ruizhen; Liu, Haisong; Komura, Taku; Yang, Xinyu
Keywords: ADD-image; Scene synthesis; interaction completion; localization
Issue Date: 2020
Citation: IEEE Transactions on Visualization and Computer Graphics, 2020, v. 26, n. 8, p. 2634-2644
Abstract: Finding where and what objects to put into an existing scene is a common task for scene synthesis and robot/character motion planning. Existing frameworks require the development of hand-crafted features suited to the task, or a full volumetric analysis that can be memory intensive and imprecise. In this paper, we propose a data-driven framework to discover a suitable location and then place the appropriate objects in a scene. Our approach is inspired by computer vision techniques for localizing objects in images: using an all-directional depth image (ADD-image) that encodes the 360-degree field of view from samples in the scene, our system regresses the images to the positions where the new object can be located. Given several candidate areas around the host object in the scene, our system predicts the partner object whose geometry fits well with the host object. Our approach is highly parallel and memory efficient, and is especially suitable for handling interactions between large and small objects. We show examples where the system can hang bags on hooks, fit chairs in front of desks, put objects into shelves, insert flowers into vases, and put hangers onto a laundry rack.
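For concreteness, the ADD-image representation mentioned in the abstract can be illustrated with a short, self-contained Python sketch. This is not the paper's implementation: the sampling resolution, the toy analytic-sphere scene, and the function name add_image are assumptions made purely to illustrate the idea of recording, for every direction around a sample point, the distance to the nearest surface.

import numpy as np

def add_image(sample_point, spheres, n_theta=32, n_phi=64, max_depth=5.0):
    # Hedged sketch of an "all-directional depth image": for each direction on
    # the sphere around sample_point, store the distance to the nearest surface
    # (capped at max_depth). The analytic-sphere scene and the resolution are
    # illustrative assumptions, not the paper's implementation.
    thetas = np.linspace(0.0, np.pi, n_theta)                    # polar angle
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)  # azimuth
    depth = np.full((n_theta, n_phi), max_depth)
    for i, theta in enumerate(thetas):
        for j, phi in enumerate(phis):
            d = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])                        # unit ray direction
            for center, radius in spheres:
                oc = sample_point - center                       # analytic ray-sphere test
                b = np.dot(oc, d)
                c = np.dot(oc, oc) - radius ** 2
                disc = b * b - c
                if disc >= 0.0:
                    t = -b - np.sqrt(disc)                       # nearest intersection distance
                    if t > 1e-6:
                        depth[i, j] = min(depth[i, j], t)
    return depth

# Toy scene: two spheres near the origin; the resulting 32x64 depth panorama is
# the kind of per-sample feature that a regressor could map to placement positions.
scene = [(np.array([0.0, 0.0, 2.0]), 0.5), (np.array([1.5, 0.0, 0.0]), 0.3)]
panorama = add_image(np.zeros(3), scene)
print(panorama.shape, panorama.min(), panorama.max())

In the paper's pipeline such panoramic depth images, computed at sample points in the scene, serve as the features from which placement positions are regressed; the sketch above only shows how the raw representation might be assembled.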
Persistent Identifier: http://hdl.handle.net/10722/288930
ISSN: 1077-2626
2023 Impact Factor: 4.7
2023 SCImago Journal Rankings: 2.056
ISI Accession Number ID: WOS:000546115000007

 

DC Field: Value
dc.contributor.author: Zhao, Xi
dc.contributor.author: Hu, Ruizhen
dc.contributor.author: Liu, Haisong
dc.contributor.author: Komura, Taku
dc.contributor.author: Yang, Xinyu
dc.date.accessioned: 2020-10-12T08:06:14Z
dc.date.available: 2020-10-12T08:06:14Z
dc.date.issued: 2020
dc.identifier.citation: IEEE Transactions on Visualization and Computer Graphics, 2020, v. 26, n. 8, p. 2634-2644
dc.identifier.issn: 1077-2626
dc.identifier.uri: http://hdl.handle.net/10722/288930
dc.description.abstract: Finding where and what objects to put into an existing scene is a common task for scene synthesis and robot/character motion planning. Existing frameworks require the development of hand-crafted features suited to the task, or a full volumetric analysis that can be memory intensive and imprecise. In this paper, we propose a data-driven framework to discover a suitable location and then place the appropriate objects in a scene. Our approach is inspired by computer vision techniques for localizing objects in images: using an all-directional depth image (ADD-image) that encodes the 360-degree field of view from samples in the scene, our system regresses the images to the positions where the new object can be located. Given several candidate areas around the host object in the scene, our system predicts the partner object whose geometry fits well with the host object. Our approach is highly parallel and memory efficient, and is especially suitable for handling interactions between large and small objects. We show examples where the system can hang bags on hooks, fit chairs in front of desks, put objects into shelves, insert flowers into vases, and put hangers onto a laundry rack.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Visualization and Computer Graphics
dc.subject: ADD-image
dc.subject: Scene synthesis
dc.subject: interaction completion
dc.subject: localization
dc.title: Localization and Completion for 3D Object Interactions
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TVCG.2019.2892454
dc.identifier.pmid: 30640616
dc.identifier.scopus: eid_2-s2.0-85059933334
dc.identifier.hkuros: 325513
dc.identifier.volume: 26
dc.identifier.issue: 8
dc.identifier.spage: 2634
dc.identifier.epage: 2644
dc.identifier.eissn: 1941-0506
dc.identifier.isi: WOS:000546115000007
dc.identifier.issnl: 1077-2626
