Article: Few-shot Learning of Homogeneous Human Locomotion Styles

Title: Few-shot Learning of Homogeneous Human Locomotion Styles
Authors: Mason, I.; Starke, S.; Zhang, H.; Bilen, H.; Komura, T.
Keywords: CCS Concepts: • Computing methodologies → Animation; Neural networks; Motion capture
Issue Date: 2018
Citation: Computer Graphics Forum, 2018, v. 37, n. 7, p. 143-153
Abstract: © 2018 The Author(s) Computer Graphics Forum © 2018 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd. Using neural networks for learning motion controllers from motion capture data is becoming popular due to the natural and smooth motions they can produce, the wide range of movements they can learn and their compactness once they are trained. Despite these advantages, these systems require large amounts of motion capture data for each new character or style of motion to be generated, and systems have to undergo lengthy retraining, and often reengineering, to get acceptable results. This can make the use of these systems impractical for animators and designers, and solving this issue is an open and rather unexplored problem in computer graphics. In this paper we propose a transfer learning approach for adapting a learned neural network to characters that move in different styles from those on which the original neural network is trained. Given a pretrained character controller in the form of a Phase-Functioned Neural Network for locomotion, our system can quickly adapt the locomotion to novel styles using only a short motion clip as an example. We introduce a canonical polyadic tensor decomposition to reduce the number of parameters required for learning from each new style, which both reduces the memory burden at runtime and facilitates learning from smaller quantities of data. We show that our system is suitable for learning stylized motions with few clips of motion data and synthesizing smooth motions in real-time.
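To make the abstract's core idea more concrete, the sketch below illustrates how a canonical polyadic (CP) decomposition can compress a bank of style-specific weight matrices so that each new style contributes only a small factor vector. This is not the authors' implementation; the tensor shapes, the rank, the variable names, and the least-squares fitting step are assumptions chosen purely for illustration.

```python
# Minimal illustrative sketch (not the authors' code): compress a stack of
# style-specific weight matrices W[s] with a rank-R canonical polyadic (CP)
# decomposition, so each new style adds only a length-R factor vector.
# Shapes, rank, and variable names below are assumptions for illustration.
import numpy as np

def cp_reconstruct(A, B, C):
    """Rebuild the 3-way tensor T[s, i, j] = sum_r A[s, r] * B[i, r] * C[j, r]."""
    return np.einsum('sr,ir,jr->sij', A, B, C)

rng = np.random.default_rng(0)
S, I, J, R = 4, 64, 32, 10      # styles, input dim, output dim, CP rank

# Shared factors B and C would be learned once with the base controller;
# each known style s contributes only the small vector A[s] (R parameters).
A = rng.normal(size=(S, R))
B = rng.normal(size=(I, R))
C = rng.normal(size=(J, R))

W = cp_reconstruct(A, B, C)     # per-style weight matrices, shape (S, I, J)

# Adapting to a new style then amounts to fitting one new row of A from a
# short clip. Here that is mimicked by a least-squares fit of the style
# factor against a stand-in target matrix (purely illustrative).
target = rng.normal(size=(I, J))
khatri_rao = np.einsum('ir,jr->ijr', B, C).reshape(I * J, R)
a_new, *_ = np.linalg.lstsq(khatri_rao, target.reshape(-1), rcond=None)
print("parameters needed for the new style:", a_new.shape)   # (R,)
```

The point of such a factorization is that the shared factors are trained once on the base data, while adapting to a new style only requires fitting an R-dimensional style vector from a short clip, which mirrors the few-shot setting described in the abstract.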
Persistent Identifier: http://hdl.handle.net/10722/288756
ISSN: 0167-7055
2023 Impact Factor: 2.7
2023 SCImago Journal Rankings: 1.968
ISI Accession Number ID: WOS:000448166700014


DC Field | Value
dc.contributor.author | Mason, I.
dc.contributor.author | Starke, S.
dc.contributor.author | Zhang, H.
dc.contributor.author | Bilen, H.
dc.contributor.author | Komura, T.
dc.date.accessioned | 2020-10-12T08:05:47Z
dc.date.available | 2020-10-12T08:05:47Z
dc.date.issued | 2018
dc.identifier.citation | Computer Graphics Forum, 2018, v. 37, n. 7, p. 143-153
dc.identifier.issn | 0167-7055
dc.identifier.uri | http://hdl.handle.net/10722/288756
dc.description.abstract | © 2018 The Author(s) Computer Graphics Forum © 2018 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd. Using neural networks for learning motion controllers from motion capture data is becoming popular due to the natural and smooth motions they can produce, the wide range of movements they can learn and their compactness once they are trained. Despite these advantages, these systems require large amounts of motion capture data for each new character or style of motion to be generated, and systems have to undergo lengthy retraining, and often reengineering, to get acceptable results. This can make the use of these systems impractical for animators and designers, and solving this issue is an open and rather unexplored problem in computer graphics. In this paper we propose a transfer learning approach for adapting a learned neural network to characters that move in different styles from those on which the original neural network is trained. Given a pretrained character controller in the form of a Phase-Functioned Neural Network for locomotion, our system can quickly adapt the locomotion to novel styles using only a short motion clip as an example. We introduce a canonical polyadic tensor decomposition to reduce the number of parameters required for learning from each new style, which both reduces the memory burden at runtime and facilitates learning from smaller quantities of data. We show that our system is suitable for learning stylized motions with few clips of motion data and synthesizing smooth motions in real-time.
dc.language | eng
dc.relation.ispartof | Computer Graphics Forum
dc.subject | • Computing methodologies → Animation
dc.subject | CCS Concepts
dc.subject | Neural networks; Motion capture
dc.title | Few-shot Learning of Homogeneous Human Locomotion Styles
dc.type | Article
dc.description.nature | link_to_subscribed_fulltext
dc.identifier.doi | 10.1111/cgf.13555
dc.identifier.scopus | eid_2-s2.0-85055458467
dc.identifier.volume | 37
dc.identifier.issue | 7
dc.identifier.spage | 143
dc.identifier.epage | 153
dc.identifier.eissn | 1467-8659
dc.identifier.isi | WOS:000448166700014
dc.identifier.issnl | 0167-7055
