File Download
There are no files associated with this item.
Links for fulltext (May Require Subscription)
- Publisher Website: 10.1109/ICCV51070.2023.01902
- Scopus: eid_2-s2.0-85179234941
Citations:
- Scopus: 0
Appears in Collections:
Conference Paper: LivelySpeaker: Towards Semantic-Aware Co-Speech Gesture Generation
Field | Value |
---|---|
Title | LivelySpeaker: Towards Semantic-Aware Co-Speech Gesture Generation |
Authors | Zhi, Yihao; Cun, Xiaodong; Chen, Xuelin; Shen, Xi; Guo, Wen; Huang, Shaoli; Gao, Shenghua |
Issue Date | 2023 |
Citation | Proceedings of the IEEE International Conference on Computer Vision, 2023, p. 20750-20760 |
Abstract | Gestures are non-verbal but important behaviors accompanying people's speech. While previous methods are able to generate speech rhythm-synchronized gestures, the semantic context of the speech is generally lacking in the gesticulations. Although semantic gestures do not occur very regularly in human speech, they are indeed the key for the audience to understand the speech context in a more immersive environment. Hence, we introduce LivelySpeaker, a framework that realizes semantics-aware co-speech gesture generation and offers several control handles. In particular, our method decouples the task into two stages: script-based gesture generation and audio-guided rhythm refinement. Specifically, the script-based gesture generation leverages the pre-trained CLIP text embeddings as the guidance for generating gestures that are highly semantically aligned with the script. Then, we devise a simple yet effective diffusion-based gesture generation backbone using pure MLPs, which is conditioned only on audio signals and learns to gesticulate with realistic motions. We utilize such a powerful prior to rhyme the script-guided gestures with the audio signals, notably in a zero-shot setting. Our novel two-stage generation framework also enables several applications, such as changing the gesticulation style, editing the co-speech gestures via textual prompting, and controlling the semantic awareness and rhythm alignment with guided diffusion. Extensive experiments demonstrate the advantages of the proposed framework over competing methods. In addition, our core diffusion-based generative model also achieves state-of-the-art performance on two benchmarks. The code and model will be released to facilitate future research. |
Persistent Identifier | http://hdl.handle.net/10722/345368 |
ISSN | 1550-5499 (2023 SCImago Journal Rankings: 12.263) |
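The abstract describes a two-stage pipeline: script-conditioned gesture generation guided by pre-trained CLIP text embeddings, followed by audio-guided rhythm refinement with a pure-MLP diffusion prior applied in a zero-shot setting. The sketch below illustrates one plausible reading of that second stage only; it is not the authors' released code. The `AudioConditionedMLPDenoiser` module, the pose and audio feature dimensions, the SDEdit-style partial re-noising, and the DDIM-style sampling loop are all illustrative assumptions.

```python
# Minimal sketch (not the authors' released code) of the audio-guided rhythm
# refinement stage described in the abstract: gestures produced from the script
# are partially re-noised and then denoised by an audio-conditioned, MLP-only
# diffusion prior, so the motion is pulled toward the speech rhythm zero-shot.
# All module names, feature dimensions, and the SDEdit/DDIM-style sampler are
# illustrative assumptions, not the paper's exact formulation.
import torch
import torch.nn as nn


class AudioConditionedMLPDenoiser(nn.Module):
    """Hypothetical stand-in for a pure-MLP diffusion backbone conditioned on audio."""

    def __init__(self, pose_dim: int = 135, audio_dim: int = 128, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + audio_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, noisy_pose, audio_feat, t):
        # Predict the clean pose x0 from the noisy pose, audio features, and timestep.
        t_emb = t.float().view(-1, 1).expand(noisy_pose.shape[0], 1)
        return self.net(torch.cat([noisy_pose, audio_feat, t_emb], dim=-1))


@torch.no_grad()
def rhythm_refine(script_gestures, audio_feat, denoiser, alphas_cumprod, t_start=200):
    """Zero-shot refinement: noise the script-driven gestures up to t_start,
    then denoise them with the audio-conditioned diffusion prior (DDIM, eta=0)."""
    a_bar = alphas_cumprod[t_start]
    x = a_bar.sqrt() * script_gestures + (1 - a_bar).sqrt() * torch.randn_like(script_gestures)
    for t in reversed(range(1, t_start + 1)):
        x0_pred = denoiser(x, audio_feat, torch.tensor([t]))
        a_bar_t, a_bar_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        eps_pred = (x - a_bar_t.sqrt() * x0_pred) / (1 - a_bar_t).sqrt()
        x = a_bar_prev.sqrt() * x0_pred + (1 - a_bar_prev).sqrt() * eps_pred
    return x


if __name__ == "__main__":
    betas = torch.linspace(1e-4, 0.02, 1000)           # standard linear noise schedule
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
    denoiser = AudioConditionedMLPDenoiser()            # untrained; placeholder weights
    script_gestures = torch.randn(4, 135)               # stage-1 (script-guided) output, faked here
    audio_feat = torch.randn(4, 128)                    # per-clip audio features, faked here
    refined = rhythm_refine(script_gestures, audio_feat, denoiser, alphas_cumprod)
    print(refined.shape)                                # torch.Size([4, 135])
```

In this reading, `t_start` would govern the trade-off the abstract mentions between semantic awareness and rhythm alignment: smaller values preserve more of the script-aligned gestures, while larger values hand more control to the audio-driven prior.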
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhi, Yihao | - |
dc.contributor.author | Cun, Xiaodong | - |
dc.contributor.author | Chen, Xuelin | - |
dc.contributor.author | Shen, Xi | - |
dc.contributor.author | Guo, Wen | - |
dc.contributor.author | Huang, Shaoli | - |
dc.contributor.author | Gao, Shenghua | - |
dc.date.accessioned | 2024-08-15T09:26:55Z | - |
dc.date.available | 2024-08-15T09:26:55Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | Proceedings of the IEEE International Conference on Computer Vision, 2023, p. 20750-20760 | - |
dc.identifier.issn | 1550-5499 | - |
dc.identifier.uri | http://hdl.handle.net/10722/345368 | - |
dc.description.abstract | Gestures are non-verbal but important behaviors accompanying people's speech. While previous methods are able to generate speech rhythm-synchronized gestures, the semantic context of the speech is generally lacking in the gesticulations. Although semantic gestures do not occur very regularly in human speech, they are indeed the key for the audience to understand the speech context in a more immersive environment. Hence, we introduce LivelySpeaker, a framework that realizes semantics-aware co-speech gesture generation and offers several control handles. In particular, our method decouples the task into two stages: script-based gesture generation and audio-guided rhythm refinement. Specifically, the script-based gesture generation leverages the pre-trained CLIP text embeddings as the guidance for generating gestures that are highly semantically aligned with the script. Then, we devise a simple yet effective diffusion-based gesture generation backbone using pure MLPs, which is conditioned only on audio signals and learns to gesticulate with realistic motions. We utilize such a powerful prior to rhyme the script-guided gestures with the audio signals, notably in a zero-shot setting. Our novel two-stage generation framework also enables several applications, such as changing the gesticulation style, editing the co-speech gestures via textual prompting, and controlling the semantic awareness and rhythm alignment with guided diffusion. Extensive experiments demonstrate the advantages of the proposed framework over competing methods. In addition, our core diffusion-based generative model also achieves state-of-the-art performance on two benchmarks. The code and model will be released to facilitate future research. | -
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the IEEE International Conference on Computer Vision | - |
dc.title | LivelySpeaker: Towards Semantic-Aware Co-Speech Gesture Generation | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/ICCV51070.2023.01902 | - |
dc.identifier.scopus | eid_2-s2.0-85179234941 | - |
dc.identifier.spage | 20750 | - |
dc.identifier.epage | 20760 | - |