Article: Neural animation layering for synthesizing martial arts movements

Title: Neural animation layering for synthesizing martial arts movements
Authors: Starke, S; Zhao, Y; Zinno, F; Komura, T
Keywords: character animation; character control; character interactions; deep learning; human motion
Issue Date: 2021
Publisher: Association for Computing Machinery, Inc. The Journal's web site is located at http://tog.acm.org
Citation: ACM Transactions on Graphics, 2021, v. 40, n. 4, p. article no. 92
Abstract: Interactively synthesizing novel combinations and variations of character movements from different motion skills is a key problem in computer animation. In this paper, we propose a deep learning framework to produce a large variety of martial arts movements in a controllable manner from raw motion capture data. Our method imitates animation layering using neural networks, with the aim of overcoming typical challenges when mixing, blending and editing movements from unaligned motion sources. The framework can synthesize novel movements from given reference motions and simple user controls, and generate unseen sequences of locomotion, punching, kicking, avoiding and combinations thereof; it can also reconstruct signature motions of different fighters, as well as close-character interactions such as clinching and carrying, by learning the spatial joint relationships. To achieve this goal, we adopt a modular framework composed of a motion generator and a set of task-dependent control modules. The motion generator functions as a motion manifold that projects novel mixed/edited trajectories to natural full-body motions, and synthesizes realistic transitions between different motions. The control modules can be developed and trained separately by engineers to include novel motion tasks, which greatly reduces network iteration time when working with large-scale datasets. Our modular framework provides a transparent control interface for animators that allows modifying or combining movements after network training, and enables iteratively adding control modules for different motion tasks and behaviors. Our system can be used for offline and online motion generation alike, and is relevant for real-time applications such as computer games.
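The modular split the abstract describes (one shared motion generator acting as a learned motion manifold, plus separately trained, task-dependent control modules whose outputs can be layered) can be pictured with a short sketch. The PyTorch-style code below is a reader's illustration only: the class names, layer sizes, control dimensionality and the simple linear blending are all assumptions for clarity, not the paper's actual architecture.

    # Minimal sketch of a generator/control-module split, assuming a shared
    # control space. All names and dimensions here are hypothetical.
    import torch
    import torch.nn as nn

    class MotionGenerator(nn.Module):
        """Learned motion manifold: maps the previous pose plus a control
        signal (e.g. mixed/edited trajectories) to the next full-body pose."""
        def __init__(self, pose_dim=222, control_dim=64, hidden=512):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(pose_dim + control_dim, hidden), nn.ELU(),
                nn.Linear(hidden, hidden), nn.ELU(),
                nn.Linear(hidden, pose_dim),
            )

        def forward(self, prev_pose, control):
            return self.net(torch.cat([prev_pose, control], dim=-1))

    class ControlModule(nn.Module):
        """Task-dependent module (locomotion, punching, ...) trained
        separately; it only has to emit a signal in the generator's
        control space, so new tasks never touch the generator itself."""
        def __init__(self, task_dim, control_dim=64, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(task_dim, hidden), nn.ELU(),
                nn.Linear(hidden, control_dim),
            )

        def forward(self, task_input):
            return self.net(task_input)

    # Layering at run time: blend control signals from two modules, then let
    # the generator project the mix back onto natural full-body motion.
    generator = MotionGenerator()
    locomotion = ControlModule(task_dim=12)   # task_dim values are made up
    punching = ControlModule(task_dim=9)
    prev_pose = torch.zeros(1, 222)
    control = 0.7 * locomotion(torch.zeros(1, 12)) + 0.3 * punching(torch.zeros(1, 9))
    next_pose = generator(prev_pose, control)  # shape (1, 222)

The design point the abstract emphasizes is visible in this shape: because each ControlModule targets the generator's control space, a new task module can be trained against a fixed generator, which is what keeps iteration time low on large datasets.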
Persistent Identifier: http://hdl.handle.net/10722/304081
ISSN: 0730-0301
2023 Impact Factor: 7.8
2023 SCImago Journal Rankings: 7.766
ISI Accession Number ID: WOS:000674930900058

 

dc.contributor.author: Starke, S
dc.contributor.author: Zhao, Y
dc.contributor.author: Zinno, F
dc.contributor.author: Komura, T
dc.date.accessioned: 2021-09-23T08:54:58Z
dc.date.available: 2021-09-23T08:54:58Z
dc.date.issued: 2021
dc.identifier.citation: ACM Transactions on Graphics, 2021, v. 40, n. 4, p. article no. 92
dc.identifier.issn: 0730-0301
dc.identifier.uri: http://hdl.handle.net/10722/304081
dc.language: eng
dc.publisher: Association for Computing Machinery, Inc. The Journal's web site is located at http://tog.acm.org
dc.relation.ispartof: ACM Transactions on Graphics
dc.rights: ACM Transactions on Graphics. Copyright © Association for Computing Machinery, Inc.
dc.rights: ©ACM, YYYY. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in PUBLICATION, {VOL#, ISS#, (DATE)} http://doi.acm.org/10.1145/nnnnnn.nnnnnn
dc.subject: character animation
dc.subject: character control
dc.subject: character interactions
dc.subject: deep learning
dc.subject: human motion
dc.title: Neural animation layering for synthesizing martial arts movements
dc.type: Article
dc.identifier.email: Komura, T: taku@cs.hku.hk
dc.identifier.authority: Komura, T=rp02741
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/3450626.3459881
dc.identifier.scopus: eid_2-s2.0-85111254885
dc.identifier.hkuros: 325508
dc.identifier.volume: 40
dc.identifier.issue: 4
dc.identifier.spage: article no. 92
dc.identifier.epage: article no. 92
dc.identifier.isi: WOS:000674930900058
dc.publisher.place: United States
