Conference Paper: A recurrent variational autoencoder for human motion synthesis

Title: A recurrent variational autoencoder for human motion synthesis
Authors: Habibie, Ikhsanul; Holden, Daniel; Schwarz, Jonathan; Yearsley, Joe; Komura, Taku
Issue Date: 2017
Citation: 28th British Machine Vision Conference (BMVC 2017), London, 4-7 September 2017. In Proceedings of the British Machine Vision Conference (BMVC), 2017, p. 119.1-119.12
Abstract: We propose a novel generative model of human motion that can be trained using a large motion capture dataset, and allows users to produce animations from high-level control signals. As previous architectures struggle to predict motions far into the future due to the inherent ambiguity, we argue that a user-provided control signal is desirable for animators and greatly reduces the predictive error for long sequences. Thus, we formulate a framework which explicitly introduces an encoding of control signals into a variational inference framework trained to learn the manifold of human motion. As part of this framework, we formulate a prior on the latent space, which allows us to generate high-quality motion without providing frames from an existing sequence. We further model the sequential nature of the task by combining samples from a variational approximation to the intractable posterior with the control signal through a recurrent neural network (RNN) that synthesizes the motion. We show that our system can predict the movements of the human body over long horizons more accurately than state-of-the-art methods. Finally, the design of our system considers practical use cases and thus provides a competitive approach to motion synthesis.
Persistent Identifier: http://hdl.handle.net/10722/288827
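For readers who want a concrete picture of the kind of architecture the abstract describes, the sketch below shows a minimal control-conditioned recurrent VAE in PyTorch. It is an illustration only, not the authors' implementation: the class name RecurrentMotionVAE, the dimensions (pose_dim, ctrl_dim, latent_dim, hidden_dim), and the single-layer GRU decoder are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class RecurrentMotionVAE(nn.Module):
    """Minimal sketch of a recurrent VAE conditioned on a control signal.

    All hyperparameters here are illustrative, not values from the paper.
    """

    def __init__(self, pose_dim=63, ctrl_dim=4, latent_dim=32, hidden_dim=256):
        super().__init__()
        # Encoder: approximate posterior q(z_t | x_t) over the motion manifold.
        self.enc = nn.Linear(pose_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder RNN: combines latent samples with the user-provided control
        # signal and synthesizes the motion, per the abstract's description.
        self.rnn = nn.GRU(latent_dim + ctrl_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, pose_dim)

    def forward(self, poses, control):
        # poses:   (batch, time, pose_dim)   ground-truth motion frames
        # control: (batch, time, ctrl_dim)   high-level control signal
        h = torch.relu(self.enc(poses))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # Feed latent samples and control jointly through the decoder RNN.
        hidden, _ = self.rnn(torch.cat([z, control], dim=-1))
        recon = self.out(hidden)
        # KL divergence against a standard normal prior on the latent space;
        # at test time one can sample z ~ N(0, I) and drive the decoder with
        # the control signal alone, without frames from an existing sequence.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, kl
```

Training would minimize a reconstruction loss on recon plus the kl term; the placement of the prior on the latent space is what lets generation proceed from sampled noise and control alone, as the abstract notes.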

 

DC Field | Value | Language
dc.contributor.author | Habibie, Ikhsanul | -
dc.contributor.author | Holden, Daniel | -
dc.contributor.author | Schwarz, Jonathan | -
dc.contributor.author | Yearsley, Joe | -
dc.contributor.author | Komura, Taku | -
dc.date.accessioned | 2020-10-12T08:05:59Z | -
dc.date.available | 2020-10-12T08:05:59Z | -
dc.date.issued | 2017 | -
dc.identifier.citation | 28th British Machine Vision Conference (BMVC 2017), London, 4-7 September 2017. In Proceedings of the British Machine Vision Conference (BMVC), 2017, p. 119.1-119.12 | -
dc.identifier.uri | http://hdl.handle.net/10722/288827 | -
dc.description.abstract | We propose a novel generative model of human motion that can be trained using a large motion capture dataset, and allows users to produce animations from high-level control signals. As previous architectures struggle to predict motions far into the future due to the inherent ambiguity, we argue that a user-provided control signal is desirable for animators and greatly reduces the predictive error for long sequences. Thus, we formulate a framework which explicitly introduces an encoding of control signals into a variational inference framework trained to learn the manifold of human motion. As part of this framework, we formulate a prior on the latent space, which allows us to generate high-quality motion without providing frames from an existing sequence. We further model the sequential nature of the task by combining samples from a variational approximation to the intractable posterior with the control signal through a recurrent neural network (RNN) that synthesizes the motion. We show that our system can predict the movements of the human body over long horizons more accurately than state-of-the-art methods. Finally, the design of our system considers practical use cases and thus provides a competitive approach to motion synthesis. | -
dc.language | eng | -
dc.relation.ispartof | Proceedings of the British Machine Vision Conference (BMVC) | -
dc.rights | © 2017. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms. | -
dc.title | A recurrent variational autoencoder for human motion synthesis | -
dc.type | Conference_Paper | -
dc.description.nature | published_or_final_version | -
dc.identifier.doi | 10.5244/c.31.119 | -
dc.identifier.scopus | eid_2-s2.0-85088774383 | -
dc.identifier.spage | 119.1 | -
dc.identifier.epage | 119.12 | -
