Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1002/cav.1546
- Scopus: eid_2-s2.0-84908543309
- WOS: WOS:000343871000003
Article: Natural preparation behavior synthesis
Field | Value |
---|---|
Title | Natural preparation behavior synthesis |
Authors | Shum, Hubert P.H.; Hoyet, Ludovic; Ho, Edmond S.L.; Komura, Taku; Multon, Franck |
Keywords | Motion synthesis; Motion blending; Preparation behavior; Reinforcement learning; Posture optimization |
Issue Date | 2014 |
Citation | Computer Animation and Virtual Worlds, 2014, v. 25, n. 5-6, p. 531-542 |
Abstract | Copyright © 2013 John Wiley & Sons, Ltd. Humans adjust their movements in advance to prepare for the forthcoming action, resulting in efficient and smooth transitions. However, traditional computer animation approaches such as motion graphs simply concatenate a series of actions without taking into account the following one. In this paper, we propose a new method to produce preparation behaviors using reinforcement learning. As an offline process, the system learns the optimal way to approach a target and to prepare for interaction. A scalar value called the level of preparation is introduced, which represents the degree of transition from the initial action to the interacting action. To synthesize the movements of preparation, we propose a customized motion blending scheme based on the level of preparation, which is followed by an optimization framework that adjusts the posture to keep the balance. During runtime, the trained controller drives the character to move to a target with the appropriate level of preparation, resulting in a humanlike behavior. We create scenes in which the character has to move in a complex environment and to interact with objects, such as crawling under and jumping over obstacles while walking. The method is useful not only for computer animation but also for real-time applications such as computer games, in which the characters need to accomplish a series of tasks in a given environment. |
Persistent Identifier | http://hdl.handle.net/10722/288637 |
ISSN | 1546-4261 (2023 Impact Factor: 0.9; 2023 SCImago Journal Rankings: 0.403) |
ISI Accession Number ID | WOS:000343871000003 |
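The abstract's core idea, a scalar "level of preparation" that controls the transition from the initial action toward the interacting action, can be illustrated with a minimal sketch. This is an assumption-laden simplification: it uses a plain per-joint linear blend, whereas the paper describes a customized motion blending scheme followed by a balance-preserving posture optimization. The joint names and poses below are hypothetical.

```python
# Illustrative sketch only: a per-joint linear blend driven by the
# "level of preparation" p in [0, 1]. The paper's actual method uses a
# customized blending scheme plus posture optimization for balance.

def blend_posture(initial_pose, interact_pose, p):
    """Blend two postures (dicts of joint name -> angle in radians).

    p = 0 reproduces the initial action's pose; p = 1 reproduces the
    interacting action's pose; intermediate values give a partially
    prepared posture.
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("level of preparation must be in [0, 1]")
    return {joint: (1.0 - p) * initial_pose[joint] + p * interact_pose[joint]
            for joint in initial_pose}

# Hypothetical example poses: walking vs. crawling under an obstacle.
walk = {"hip": 0.1, "knee": 0.2, "spine": 0.0}
crawl = {"hip": 1.2, "knee": 1.5, "spine": 0.8}

# Halfway-prepared posture while approaching the obstacle.
half_prepared = blend_posture(walk, crawl, 0.5)
```

In the paper, the trained reinforcement-learning controller chooses the appropriate level of preparation at runtime as the character approaches the target; the sketch above only shows how such a scalar could parameterize a pose blend.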
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shum, Hubert P.H. | - |
dc.contributor.author | Hoyet, Ludovic | - |
dc.contributor.author | Ho, Edmond S.L. | - |
dc.contributor.author | Komura, Taku | - |
dc.contributor.author | Multon, Franck | - |
dc.date.accessioned | 2020-10-12T08:05:28Z | - |
dc.date.available | 2020-10-12T08:05:28Z | - |
dc.date.issued | 2014 | - |
dc.identifier.citation | Computer Animation and Virtual Worlds, 2014, v. 25, n. 5-6, p. 531-542 | - |
dc.identifier.issn | 1546-4261 | - |
dc.identifier.uri | http://hdl.handle.net/10722/288637 | - |
dc.language | eng | - |
dc.relation.ispartof | Computer Animation and Virtual Worlds | - |
dc.subject | Motion synthesis | - |
dc.subject | Motion blending | - |
dc.subject | Preparation behavior | - |
dc.subject | Reinforcement learning | - |
dc.subject | Posture optimization | - |
dc.title | Natural preparation behavior synthesis | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1002/cav.1546 | - |
dc.identifier.scopus | eid_2-s2.0-84908543309 | - |
dc.identifier.volume | 25 | - |
dc.identifier.issue | 5-6 | - |
dc.identifier.spage | 531 | - |
dc.identifier.epage | 542 | - |
dc.identifier.eissn | 1546-427X | - |
dc.identifier.isi | WOS:000343871000003 | - |
dc.identifier.issnl | 1546-4261 | - |