Links for fulltext (may require subscription):
- Publisher Website (DOI): 10.1145/3450626.3459830
- Scopus: eid_2-s2.0-85111272754
- WOS: WOS:000674930900086
Article: ManipNet: neural manipulation synthesis with a hand-object spatial representation
Title | ManipNet: neural manipulation synthesis with a hand-object spatial representation |
---|---|
Authors | Zhang, H; Ye, Y; Shiratori, T; Komura, T |
Issue Date | 2021 |
Publisher | Association for Computing Machinery, Inc. The Journal's web site is located at http://tog.acm.org |
Citation | ACM Transactions on Graphics, 2021, v. 40, n. 4, article no. 121 |
Abstract | Natural hand manipulations exhibit complex finger maneuvers adaptive to object shapes and the tasks at hand. Learning dexterous manipulation from data in a brute-force way would require a prohibitive amount of examples to effectively cover the combinatorial space of 3D shapes and activities. In this paper, we propose a hand-object spatial representation that can achieve generalization from limited data. Our representation combines the global object shape, as voxel occupancies, with local geometric details, as samples of closest distances. This representation is used by a neural network to regress finger motions from input trajectories of wrists and objects. Specifically, we provide the network with the current finger pose, past and future trajectories, and the spatial representations extracted from these trajectories. The network then predicts a new finger pose for the next frame as an autoregressive model. With a carefully chosen hand-centric coordinate system, we can handle single-handed and two-handed motions in a unified framework. Learning from a small number of primitive shapes and kitchenware objects, the network is able to synthesize a variety of finger gaits for grasping, in-hand manipulation, and bimanual object handling on a rich set of novel shapes and functional tasks. We also present a live demo of manipulating virtual objects in real time using a simple physical prop. Our system is useful for offline animation or real-time applications that can tolerate a small delay. |
Persistent Identifier | http://hdl.handle.net/10722/304080 |
ISSN | 0730-0301 (2023 Impact Factor: 7.8; 2023 SCImago Journal Rankings: 7.766) |
ISI Accession Number ID | WOS:000674930900086 |
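The abstract describes a two-part hand-object spatial representation: a coarse voxel occupancy grid capturing global object shape, plus closest-distance samples capturing local geometric detail near the hand. The sketch below illustrates the general idea of those two components with NumPy. It is not the paper's implementation; the function names, grid resolution, and the assumption that object points are already expressed in a hand-centric coordinate frame are all illustrative choices of this sketch.

```python
import numpy as np

def voxel_occupancy(object_points, grid_min, grid_max, resolution=8):
    """Binary occupancy grid over a hand-centric bounding box.

    object_points: (N, 3) surface samples of the object, assumed to be
    already transformed into the hand's local coordinate frame.
    Returns a (resolution, resolution, resolution) float array of 0/1.
    """
    grid = np.zeros((resolution,) * 3, dtype=np.float32)
    extent = grid_max - grid_min
    # Map each point to a voxel index; discard points outside the box.
    idx = np.floor((object_points - grid_min) / extent * resolution).astype(int)
    inside = np.all((idx >= 0) & (idx < resolution), axis=1)
    kept = idx[inside]
    grid[kept[:, 0], kept[:, 1], kept[:, 2]] = 1.0
    return grid

def closest_distances(sensor_points, object_points):
    """For each sample point on the hand surface, the distance to the
    nearest object surface sample (the local geometric detail).
    Brute-force pairwise distances; fine for small point counts."""
    diff = sensor_points[:, None, :] - object_points[None, :, :]  # (S, N, 3)
    return np.linalg.norm(diff, axis=-1).min(axis=1)              # (S,)
```

Flattening the occupancy grid and concatenating it with the distance samples (and the pose/trajectory inputs described in the abstract) would yield one feature vector per frame for an autoregressive pose regressor.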
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, H | - |
dc.contributor.author | Ye, Y | - |
dc.contributor.author | Shiratori, T | - |
dc.contributor.author | Komura, T | - |
dc.date.accessioned | 2021-09-23T08:54:57Z | - |
dc.date.available | 2021-09-23T08:54:57Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | ACM Transactions on Graphics, 2021, v. 40 n. 4, p. article no. 121 | - |
dc.identifier.issn | 0730-0301 | - |
dc.identifier.uri | http://hdl.handle.net/10722/304080 | - |
dc.description.abstract | Natural hand manipulations exhibit complex finger maneuvers adaptive to object shapes and the tasks at hand. Learning dexterous manipulation from data in a brute-force way would require a prohibitive amount of examples to effectively cover the combinatorial space of 3D shapes and activities. In this paper, we propose a hand-object spatial representation that can achieve generalization from limited data. Our representation combines the global object shape, as voxel occupancies, with local geometric details, as samples of closest distances. This representation is used by a neural network to regress finger motions from input trajectories of wrists and objects. Specifically, we provide the network with the current finger pose, past and future trajectories, and the spatial representations extracted from these trajectories. The network then predicts a new finger pose for the next frame as an autoregressive model. With a carefully chosen hand-centric coordinate system, we can handle single-handed and two-handed motions in a unified framework. Learning from a small number of primitive shapes and kitchenware objects, the network is able to synthesize a variety of finger gaits for grasping, in-hand manipulation, and bimanual object handling on a rich set of novel shapes and functional tasks. We also present a live demo of manipulating virtual objects in real time using a simple physical prop. Our system is useful for offline animation or real-time applications that can tolerate a small delay. | -
dc.language | eng | - |
dc.publisher | Association for Computing Machinery, Inc. The Journal's web site is located at http://tog.acm.org | - |
dc.relation.ispartof | ACM Transactions on Graphics | - |
dc.rights | ACM Transactions on Graphics. Copyright © Association for Computing Machinery, Inc. | - |
dc.rights | ©ACM, 2021. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Graphics, vol. 40, issue 4 (2021), http://doi.acm.org/10.1145/3450626.3459830 | -
dc.title | ManipNet: neural manipulation synthesis with a hand-object spatial representation | - |
dc.type | Article | - |
dc.identifier.email | Komura, T: taku@cs.hku.hk | - |
dc.identifier.authority | Komura, T=rp02741 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1145/3450626.3459830 | - |
dc.identifier.scopus | eid_2-s2.0-85111272754 | - |
dc.identifier.hkuros | 325507 | - |
dc.identifier.volume | 40 | - |
dc.identifier.issue | 4 | - |
dc.identifier.spage | article no. 121 | - |
dc.identifier.epage | article no. 121 | - |
dc.identifier.isi | WOS:000674930900086 | - |
dc.publisher.place | United States | - |