Conference Paper: Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates
Title | Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates |
---|---|
Authors | Qian, Shenhan; Tu, Zhi; Zhi, Yihao; Liu, Wen; Gao, Shenghua |
Issue Date | 2021 |
Citation | Proceedings of the IEEE International Conference on Computer Vision, 2021, p. 11057-11066 |
Abstract | Co-speech gesture generation aims to synthesize a gesture sequence that not only looks real but also matches the input speech audio. Our method generates the movements of a complete upper body, including the arms, hands, and head. Although recent data-driven methods have achieved great success, challenges remain, such as limited variety, poor fidelity, and the lack of objective metrics. Motivated by the fact that speech cannot fully determine the gesture, we design a method that learns a set of gesture template vectors to model the latent conditions, which relieves the ambiguity. In our method, the template vector determines the general appearance of a generated gesture sequence, while the speech audio drives the subtle movements of the body; both are indispensable for synthesizing a realistic gesture sequence. Because an objective metric for gesture-speech synchronization is intractable, we adopt the lip-sync error as a proxy metric to tune and evaluate the synchronization ability of our model. Extensive experiments show the superiority of our method in both objective and subjective evaluations of fidelity and synchronization. |
Persistent Identifier | http://hdl.handle.net/10722/345175 |
ISSN | 1550-5499 (2023 SCImago Journal Rankings: 12.263) |
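The abstract above describes the core idea: a small bank of learned "template" vectors captures the latent conditions that speech alone cannot determine, while per-frame audio features drive the finer motion. The following is a minimal, hedged sketch of that idea, not the authors' implementation; the module names, feature dimensions, template count, and pose dimensionality are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): a learned template vector sets the
# overall style of a gesture sequence, and per-frame audio features drive the
# frame-by-frame motion. Dimensions and module choices are assumptions.
import torch
import torch.nn as nn


class TemplateGestureGenerator(nn.Module):
    def __init__(self, num_templates=8, template_dim=32, audio_dim=128,
                 hidden_dim=256, pose_dim=3 * 54):  # 54 upper-body joints is an assumed layout
        super().__init__()
        # Learnable bank of gesture template vectors modelling the latent
        # conditions that the speech audio cannot fully determine.
        self.templates = nn.Parameter(torch.randn(num_templates, template_dim))
        # Recurrent decoder: concatenated audio features and the chosen
        # template are mapped to a pose vector for every frame.
        self.rnn = nn.GRU(audio_dim + template_dim, hidden_dim, batch_first=True)
        self.pose_head = nn.Linear(hidden_dim, pose_dim)

    def forward(self, audio_feats, template_idx):
        # audio_feats: (batch, frames, audio_dim) per-frame speech features
        # template_idx: (batch,) index selecting one template per sample
        batch, frames, _ = audio_feats.shape
        tmpl = self.templates[template_idx]              # (batch, template_dim)
        tmpl = tmpl.unsqueeze(1).expand(-1, frames, -1)  # broadcast over time
        h, _ = self.rnn(torch.cat([audio_feats, tmpl], dim=-1))
        return self.pose_head(h)                         # (batch, frames, pose_dim)


if __name__ == "__main__":
    model = TemplateGestureGenerator()
    audio = torch.randn(2, 100, 128)            # 2 clips, 100 frames of audio features
    poses = model(audio, torch.tensor([0, 3]))  # pick a different template per clip
    print(poses.shape)                          # torch.Size([2, 100, 162])
```

In this sketch, swapping the template index while keeping the audio fixed would change the general appearance of the generated sequence, which is the role the abstract attributes to the template vectors; the lip-sync proxy metric mentioned in the abstract is not modelled here.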
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Qian, Shenhan | - |
dc.contributor.author | Tu, Zhi | - |
dc.contributor.author | Zhi, Yihao | - |
dc.contributor.author | Liu, Wen | - |
dc.contributor.author | Gao, Shenghua | - |
dc.date.accessioned | 2024-08-15T09:25:42Z | - |
dc.date.available | 2024-08-15T09:25:42Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Proceedings of the IEEE International Conference on Computer Vision, 2021, p. 11057-11066 | - |
dc.identifier.issn | 1550-5499 | - |
dc.identifier.uri | http://hdl.handle.net/10722/345175 | - |
dc.description.abstract | Co-speech gesture generation aims to synthesize a gesture sequence that not only looks real but also matches the input speech audio. Our method generates the movements of a complete upper body, including the arms, hands, and head. Although recent data-driven methods have achieved great success, challenges remain, such as limited variety, poor fidelity, and the lack of objective metrics. Motivated by the fact that speech cannot fully determine the gesture, we design a method that learns a set of gesture template vectors to model the latent conditions, which relieves the ambiguity. In our method, the template vector determines the general appearance of a generated gesture sequence, while the speech audio drives the subtle movements of the body; both are indispensable for synthesizing a realistic gesture sequence. Because an objective metric for gesture-speech synchronization is intractable, we adopt the lip-sync error as a proxy metric to tune and evaluate the synchronization ability of our model. Extensive experiments show the superiority of our method in both objective and subjective evaluations of fidelity and synchronization. | -
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the IEEE International Conference on Computer Vision | - |
dc.title | Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/ICCV48922.2021.01089 | - |
dc.identifier.scopus | eid_2-s2.0-85126864031 | - |
dc.identifier.spage | 11057 | - |
dc.identifier.epage | 11066 | - |