Conference Paper: A deep learning framework for character motion synthesis and editing

Title: A deep learning framework for character motion synthesis and editing
Authors: Holden, Daniel; Saito, Jun; Komura, Taku
Keywords: Human motion; Convolutional neural networks; Deep learning; Manifold learning; Autoencoder; Character animation
Issue Date: 2016
Citation: ACM Transactions on Graphics, 2016, v. 35, n. 4, article no. 138
Abstract: We present a framework to synthesize character movements based on high-level parameters, such that the produced movements respect the manifold of human motion, trained on a large motion capture dataset. The learned motion manifold, which is represented by the hidden units of a convolutional autoencoder, represents motion data in sparse components which can be combined to produce a wide range of complex movements. To map from high-level parameters to the motion manifold, we stack a deep feedforward neural network on top of the trained autoencoder. This network is trained to produce realistic motion sequences from parameters such as a curve over the terrain that the character should follow, or a target location for punching and kicking. The feedforward control network and the motion manifold are trained independently, allowing the user to easily switch between feedforward networks according to the desired interface, without re-training the motion manifold. Once motion is generated it can be edited by performing optimization in the space of the motion manifold. This allows for imposing kinematic constraints, or transforming the style of the motion, while ensuring the edited motion remains natural. As a result, the system can produce smooth, high-quality motion sequences without any manual pre-processing of the training data.
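The abstract describes a convolutional autoencoder whose hidden units form the motion manifold: the encoder convolves joint channels over time into a sparse, non-negative representation, and the decoder maps that representation back to motion. A minimal numpy sketch of that forward pass, with illustrative sizes and untrained random weights (the layer counts, filter widths, and channel sizes here are assumptions for demonstration, not the paper's exact architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 240 frames, 66 joint channels, 256 hidden units,
# temporal filter width 25. These are placeholder values.
FRAMES, CHANNELS, HIDDEN, WIDTH = 240, 66, 256, 25

def conv1d(x, w):
    """Valid 1-D convolution over time: x is (C_in, T), w is (C_out, C_in, W)."""
    c_out, c_in, width = w.shape
    t_out = x.shape[1] - width + 1
    out = np.zeros((c_out, t_out))
    for t in range(t_out):
        out[:, t] = np.tensordot(w, x[:, t:t + width], axes=([1, 2], [0, 1]))
    return out

def relu(x):
    return np.maximum(x, 0.0)  # non-negative activations give a sparse code

# Encoder maps motion to hidden "manifold" units; the decoder mirrors it.
enc_w = rng.normal(0, 0.01, (HIDDEN, CHANNELS, WIDTH))
dec_w = rng.normal(0, 0.01, (CHANNELS, HIDDEN, WIDTH))

motion = rng.normal(size=(CHANNELS, FRAMES))   # one motion clip
hidden = relu(conv1d(motion, enc_w))           # sparse hidden representation
# Pad so the decoded clip recovers the original temporal length.
recon = conv1d(np.pad(hidden, ((0, 0), (WIDTH - 1, WIDTH - 1))), dec_w)

print(hidden.shape, recon.shape)
```

The paper's feedforward control network would then be trained to produce `hidden`-like codes directly from high-level parameters, so the same decoder turns control signals into motion.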
Persistent Identifier: http://hdl.handle.net/10722/289058
ISSN: 0730-0301
2022 Impact Factor: 6.2
2020 SCImago Journal Rankings: 2.153
ISI Accession Number ID: WOS:000380112400108


DC Field: Value
dc.contributor.author: Holden, Daniel
dc.contributor.author: Saito, Jun
dc.contributor.author: Komura, Taku
dc.date.accessioned: 2020-10-12T08:06:34Z
dc.date.available: 2020-10-12T08:06:34Z
dc.date.issued: 2016
dc.identifier.citation: ACM Transactions on Graphics, 2016, v. 35, n. 4, article no. 138
dc.identifier.issn: 0730-0301
dc.identifier.uri: http://hdl.handle.net/10722/289058
dc.description.abstract: We present a framework to synthesize character movements based on high-level parameters, such that the produced movements respect the manifold of human motion, trained on a large motion capture dataset. The learned motion manifold, which is represented by the hidden units of a convolutional autoencoder, represents motion data in sparse components which can be combined to produce a wide range of complex movements. To map from high-level parameters to the motion manifold, we stack a deep feedforward neural network on top of the trained autoencoder. This network is trained to produce realistic motion sequences from parameters such as a curve over the terrain that the character should follow, or a target location for punching and kicking. The feedforward control network and the motion manifold are trained independently, allowing the user to easily switch between feedforward networks according to the desired interface, without re-training the motion manifold. Once motion is generated it can be edited by performing optimization in the space of the motion manifold. This allows for imposing kinematic constraints, or transforming the style of the motion, while ensuring the edited motion remains natural. As a result, the system can produce smooth, high-quality motion sequences without any manual pre-processing of the training data.
dc.language: eng
dc.relation.ispartof: ACM Transactions on Graphics
dc.subject: Human motion
dc.subject: Convolutional neural networks
dc.subject: Deep learning
dc.subject: Manifold learning
dc.subject: Autoencoder
dc.subject: Character animation
dc.title: A deep learning framework for character motion synthesis and editing
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/2897824.2925975
dc.identifier.scopus: eid_2-s2.0-84980028529
dc.identifier.volume: 35
dc.identifier.issue: 4
dc.identifier.spage: article no. 138
dc.identifier.epage: article no. 138
dc.identifier.eissn: 1557-7368
dc.identifier.isi: WOS:000380112400108
dc.identifier.issnl: 0730-0301
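The abstract also describes editing generated motion by optimizing in the space of the motion manifold: a constraint cost (e.g. a kinematic target) plus a regularizer is minimized over the hidden code, and the decoder turns the optimized code back into motion. A toy numpy sketch of that idea, using a random linear stand-in for the decoder and plain gradient descent (all sizes, weights, and the quadratic costs are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: a fixed linear "decoder" D maps 8 hidden units to 4 pose
# values; we constrain pose coordinate 0 to hit a target.
D = rng.normal(size=(4, 8))
h0 = rng.normal(size=8)          # hidden code of the original motion
target = 1.5                     # desired value of pose coordinate 0

def cost(h):
    pose = D @ h
    # constraint term (hit the target) + regularizer (stay close to the
    # original code, a crude proxy for staying on the manifold)
    return (pose[0] - target) ** 2 + 0.1 * np.sum((h - h0) ** 2)

def grad(h):
    pose = D @ h
    return 2.0 * (pose[0] - target) * D[0] + 0.2 * (h - h0)

h = h0.copy()
for _ in range(500):             # plain gradient descent on the hidden code
    h -= 0.05 * grad(h)

print(round(float((D @ h)[0]), 3))  # edited pose coordinate, near the target
```

Because the edit is performed on the hidden code rather than on the raw joint angles, the decoded result stays close to the learned distribution of motion, which is the property the paper exploits for natural-looking edits.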
