File Download
There are no files associated with this item.
Links for fulltext (May Require Subscription)
- Publisher Website: 10.1145/3355089.3356505
- Scopus: eid_2-s2.0-85078920792
- WOS: WOS:000498397300058
Article: Neural state machine for character-scene interactions
Title | Neural state machine for character-scene interactions |
---|---|
Authors | Starke, Sebastian; Zhang, He; Komura, Taku; Saito, Jun |
Keywords | character animation; human motion; neural networks; locomotion; deep learning; character control |
Issue Date | 2019 |
Citation | ACM Transactions on Graphics, 2019, v. 38, n. 6, article no. 209 |
Abstract | © 2019 Association for Computing Machinery. We propose Neural State Machine, a novel data-driven framework to guide characters to achieve goal-driven actions with precise scene interactions. Even a seemingly simple task such as sitting on a chair is notoriously hard to model with supervised learning. This difficulty is because such a task involves complex planning with periodic and non-periodic motions reacting to the scene geometry to precisely position and orient the character. Our proposed deep auto-regressive framework enables modeling of multi-modal scene interaction behaviors purely from data. Given high-level instructions such as the goal location and the action to be launched there, our system computes a series of movements and transitions to reach the goal in the desired state. To allow characters to adapt to a wide range of geometry such as different shapes of furniture and obstacles, we incorporate an efficient data augmentation scheme to randomly switch the 3D geometry while maintaining the context of the original motion. To increase the precision to reach the goal during runtime, we introduce a control scheme that combines egocentric inference and goal-centric inference. We demonstrate the versatility of our model with various scene interaction tasks such as sitting on a chair, avoiding obstacles, opening and entering through a door, and picking and carrying objects generated in real-time just from a single model. |
Persistent Identifier | http://hdl.handle.net/10722/288788 |
ISSN | 0730-0301 (2021 Impact Factor: 7.403; 2020 SCImago Journal Rankings: 2.153) |
ISI Accession Number ID | WOS:000498397300058 |
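The abstract's control scheme combines egocentric inference (prediction in the character's own frame) with goal-centric inference (prediction relative to the target). A minimal sketch of that blending idea follows; the function name `blend_inference`, the linear distance-based weight, and the `max_dist` cutoff are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def blend_inference(ego_pred, goal_pred, dist_to_goal, max_dist=5.0):
    """Blend egocentric and goal-centric predictions.

    Hypothetical sketch: far from the goal, trust the egocentric
    prediction; as the character approaches, shift weight toward the
    goal-centric prediction so the final position/orientation is precise.
    """
    # Weight is 1.0 when far away, decaying linearly to 0.0 at the goal.
    w = np.clip(dist_to_goal / max_dist, 0.0, 1.0)
    return w * np.asarray(ego_pred) + (1.0 - w) * np.asarray(goal_pred)
```

For example, at the goal (`dist_to_goal = 0`) the output equals the goal-centric prediction, and beyond `max_dist` it equals the egocentric one.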
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Starke, Sebastian | - |
dc.contributor.author | Zhang, He | - |
dc.contributor.author | Komura, Taku | - |
dc.contributor.author | Saito, Jun | - |
dc.date.accessioned | 2020-10-12T08:05:52Z | - |
dc.date.available | 2020-10-12T08:05:52Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | ACM Transactions on Graphics, 2019, v. 38, n. 6, article no. 209 | - |
dc.identifier.issn | 0730-0301 | - |
dc.identifier.uri | http://hdl.handle.net/10722/288788 | - |
dc.description.abstract | © 2019 Association for Computing Machinery. We propose Neural State Machine, a novel data-driven framework to guide characters to achieve goal-driven actions with precise scene interactions. Even a seemingly simple task such as sitting on a chair is notoriously hard to model with supervised learning. This difficulty is because such a task involves complex planning with periodic and non-periodic motions reacting to the scene geometry to precisely position and orient the character. Our proposed deep auto-regressive framework enables modeling of multi-modal scene interaction behaviors purely from data. Given high-level instructions such as the goal location and the action to be launched there, our system computes a series of movements and transitions to reach the goal in the desired state. To allow characters to adapt to a wide range of geometry such as different shapes of furniture and obstacles, we incorporate an efficient data augmentation scheme to randomly switch the 3D geometry while maintaining the context of the original motion. To increase the precision to reach the goal during runtime, we introduce a control scheme that combines egocentric inference and goal-centric inference. We demonstrate the versatility of our model with various scene interaction tasks such as sitting on a chair, avoiding obstacles, opening and entering through a door, and picking and carrying objects generated in real-time just from a single model. | - |
dc.language | eng | - |
dc.relation.ispartof | ACM Transactions on Graphics | - |
dc.subject | character animation | - |
dc.subject | human motion | - |
dc.subject | neural networks | - |
dc.subject | locomotion | - |
dc.subject | deep learning | - |
dc.subject | character control | - |
dc.title | Neural state machine for character-scene interactions | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1145/3355089.3356505 | - |
dc.identifier.scopus | eid_2-s2.0-85078920792 | - |
dc.identifier.hkuros | 318294 | - |
dc.identifier.volume | 38 | - |
dc.identifier.issue | 6 | - |
dc.identifier.spage | article no. 209 | - |
dc.identifier.epage | article no. 209 | - |
dc.identifier.eissn | 1557-7368 | - |
dc.identifier.isi | WOS:000498397300058 | - |
dc.identifier.issnl | 0730-0301 | - |