Conference Paper: Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates

Title: Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates
Authors: Qian, Shenhan; Tu, Zhi; Zhi, Yihao; Liu, Wen; Gao, Shenghua
Issue Date: 2021
Citation: Proceedings of the IEEE International Conference on Computer Vision, 2021, p. 11057-11066
Abstract: Co-speech gesture generation aims to synthesize a gesture sequence that not only looks realistic but also matches the input speech audio. Our method generates the movements of the complete upper body, including the arms, hands, and head. Although recent data-driven methods have achieved great success, challenges remain, such as limited variety, poor fidelity, and a lack of objective metrics. Motivated by the fact that speech cannot fully determine gesture, we design a method that learns a set of gesture template vectors to model the latent conditions, which relieves the ambiguity. In our method, the template vector determines the general appearance of a generated gesture sequence, while the speech audio drives the subtle movements of the body; both are indispensable for synthesizing a realistic gesture sequence. Because an objective metric for gesture-speech synchronization is intractable, we adopt the lip-sync error as a proxy metric to tune and evaluate the synchronization ability of our model. Extensive experiments show the superiority of our method in both objective and subjective evaluations of fidelity and synchronization.
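The decomposition described in the abstract can be illustrated with a minimal, purely schematic sketch (not the authors' implementation): a learned template vector sets the coarse pose trajectory, while per-frame audio features add subtle residual motion. All dimensions, the sinusoidal "decoder", and the scaling "encoder" below are hypothetical stand-ins for learned networks.

```python
import math
import random

random.seed(0)

NUM_JOINTS = 12    # upper-body joints (hypothetical count)
TEMPLATE_DIM = 8   # size of one gesture template vector (hypothetical)
NUM_FRAMES = 16

def decode_template(template, num_frames):
    """Map a template vector to a coarse per-frame pose.
    Stands in for a learned decoder: here, a fixed sinusoid."""
    poses = []
    for t in range(num_frames):
        phase = t / num_frames
        poses.append([sum(template) * math.sin(2 * math.pi * phase)
                      for _ in range(NUM_JOINTS)])
    return poses

def encode_audio(audio_features):
    """Map per-frame audio features to subtle pose residuals.
    Stands in for a learned audio encoder: here, a small scaling."""
    return [[0.1 * f for _ in range(NUM_JOINTS)] for f in audio_features]

# One vector drawn from the learned template set (here: a random stand-in).
template = [random.uniform(-1, 1) for _ in range(TEMPLATE_DIM)]
audio = [random.uniform(-1, 1) for _ in range(NUM_FRAMES)]  # fake features

coarse = decode_template(template, NUM_FRAMES)
residual = encode_audio(audio)
# The final gesture is the coarse template motion plus audio-driven residuals.
gesture = [[c + r for c, r in zip(cf, rf)]
           for cf, rf in zip(coarse, residual)]

print(len(gesture), len(gesture[0]))  # frames x joints
```

Picking a different template vector changes the overall look of the sequence while the same audio still drives the fine motion, mirroring the role split the abstract describes.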
Persistent Identifier: http://hdl.handle.net/10722/345175
ISSN: 1550-5499
2023 SCImago Journal Rankings: 12.263

 

Dublin Core metadata (field: value):

dc.contributor.author: Qian, Shenhan
dc.contributor.author: Tu, Zhi
dc.contributor.author: Zhi, Yihao
dc.contributor.author: Liu, Wen
dc.contributor.author: Gao, Shenghua
dc.date.accessioned: 2024-08-15T09:25:42Z
dc.date.available: 2024-08-15T09:25:42Z
dc.date.issued: 2021
dc.identifier.citation: Proceedings of the IEEE International Conference on Computer Vision, 2021, p. 11057-11066
dc.identifier.issn: 1550-5499
dc.identifier.uri: http://hdl.handle.net/10722/345175
dc.description.abstract: Co-speech gesture generation is to synthesize a gesture sequence that not only looks real but also matches with the input speech audio. Our method generates the movements of a complete upper body, including arms, hands, and the head. Although recent data-driven methods achieve great success, challenges still exist, such as limited variety, poor fidelity, and lack of objective metrics. Motivated by the fact that the speech cannot fully determine the gesture, we design a method that learns a set of gesture template vectors to model the latent conditions, which relieve the ambiguity. For our method, the template vector determines the general appearance of a generated gesture sequence, while the speech audio drives subtle movements of the body, both indispensable for synthesizing a realistic gesture sequence. Due to the intractability of an objective metric for gesture-speech synchronization, we adopt the lip-sync error as a proxy metric to tune and evaluate the synchronization ability of our model. Extensive experiments show the superiority of our method in both objective and subjective evaluations on fidelity and synchronization.
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE International Conference on Computer Vision
dc.title: Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICCV48922.2021.01089
dc.identifier.scopus: eid_2-s2.0-85126864031
dc.identifier.spage: 11057
dc.identifier.epage: 11066

This record can be exported via the OAI-PMH interface in XML formats, or to other non-XML formats.
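An OAI-PMH GetRecord request for this item might be constructed as follows. The base URL and the OAI identifier are hypothetical placeholders (the repository's actual endpoint and identifier scheme are not given here); only the handle suffix 10722/345175 and the Dublin Core metadata prefix come from the record above.

```python
from urllib.parse import urlencode

# Hypothetical OAI-PMH base URL; replace with the repository's real endpoint.
BASE_URL = "https://example-repository.org/oai/request"

params = {
    "verb": "GetRecord",
    "metadataPrefix": "oai_dc",  # Dublin Core, matching the dc.* fields above
    # DSpace-style identifier built from the handle (an assumption).
    "identifier": "oai:example-repository.org:10722/345175",
}

# urlencode percent-escapes the ':' and '/' in the identifier.
url = BASE_URL + "?" + urlencode(params)
print(url)
```

The response would be an XML envelope whose `<metadata>` element carries the same `dc.*` fields listed in the record.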