SemGes: Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning

Lanmiao Liu1,2,3, Esam Ghaleb1,2, Aslı Özyürek1,2, Zerrin Yumak3

1Max Planck Institute for Psycholinguistics, 2Donders Institute for Brain, Cognition and Behaviour, 3Utrecht University

{lanmiao.liu, esam.ghaleb, asli.ozyurek}@mpi.nl     z.yumak@uu.nl

🏆 Accepted at ICCV 2025

Method Overview

🔥 Highlights

We introduce a novel framework, SemGes, that first learns a robust VQ-VAE motion prior for body and hand gestures, and then generates gestures driven by fused speech audio, text-based semantics, and speaker identity in a cross-modal transformer.

Our method jointly captures discourse-level context via a semantic coherence loss and fine-grained representational gestures (e.g., iconic, metaphoric) via a semantic relevance loss.

We propose an overlap-and-combine inference algorithm that maintains smooth continuity over extended durations (a minimal sketch follows these highlights).

Extensive experiments on two benchmarks, BEAT and TED Expressive, show that our method outperforms recent baselines both in objective metrics (e.g., Fréchet Gesture Distance (FGD), diversity, semantic alignment) and in user judgments of the generated gestures.
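To make the overlap-and-combine idea concrete, here is a minimal sketch of how such an inference scheme can be realized: motion is generated in fixed-length, overlapping windows, and the overlapping frames are blended with linear cross-fade weights before per-frame normalization. The function name `overlap_and_combine`, the window/stride sizes, and the linear blending weights are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def overlap_and_combine(windows, stride):
    """Blend overlapping generated motion windows into one long sequence.

    windows : list of arrays of shape (win_len, dof), generated for
              consecutive, overlapping speech segments.
    stride  : number of new frames each window advances the sequence by,
              so consecutive windows overlap by (win_len - stride) frames.
    """
    win_len, dof = windows[0].shape
    overlap = win_len - stride
    total_len = stride * (len(windows) - 1) + win_len
    out = np.zeros((total_len, dof))
    weight = np.zeros((total_len, 1))

    # Linear cross-fade weights inside the overlap regions (illustrative choice).
    ramp = np.ones(win_len)
    if overlap > 0:
        fade = np.linspace(0.0, 1.0, overlap + 2)[1:-1]  # strictly positive
        ramp[:overlap] = fade          # fade in at the start of each window
        ramp[-overlap:] = fade[::-1]   # fade out at the end of each window
    ramp = ramp[:, None]

    for i, win in enumerate(windows):
        start = i * stride
        out[start:start + win_len] += win * ramp
        weight[start:start + win_len] += ramp

    # Per-frame normalization keeps the edges exact and the transitions smooth.
    return out / weight


# Example: four 64-frame windows overlapping by 16 frames; the 141-dimensional
# pose vector is an arbitrary placeholder.
wins = [np.random.randn(64, 141) for _ in range(4)]
motion = overlap_and_combine(wins, stride=48)   # shape (208, 141)
```

Cross-fading rather than hard concatenation avoids discontinuities at window boundaries, which is what allows generation over arbitrarily long speech inputs.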

📄 Abstract

Creating a virtual avatar with semantically coherent gestures that are aligned with speech is a challenging task. Existing gesture generation research has mainly focused on generating rhythmic beat gestures, neglecting the semantic context of the gestures. In this paper, we propose a novel approach for semantic grounding in co-speech gesture generation that integrates semantic information at both fine-grained and global levels. Our approach starts by learning a motion prior through a vector-quantized variational autoencoder (VQ-VAE). Building on this model, a second-stage module automatically generates gestures from speech, text-based semantics, and speaker identity, and ensures consistency between the semantic relevance of the generated gestures and the co-occurring speech semantics through semantic coherence and relevance modules. Experimental results demonstrate that our approach enhances the realism and coherence of semantic gestures. Extensive experiments and user studies show that our method outperforms state-of-the-art approaches on two co-speech gesture generation benchmarks in both objective and subjective metrics.
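As background for the first-stage motion prior described above, the snippet below sketches the core VQ-VAE quantization step: continuous motion latents are snapped to their nearest codebook entries, with a straight-through estimator so the encoder still receives gradients. The class name `MotionQuantizer`, the codebook size, and the latent dimension are illustrative assumptions; the motion encoder/decoder and the codebook/commitment losses are omitted.

```python
import torch
import torch.nn as nn

class MotionQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup for a VQ-VAE-style motion prior
    (illustrative sketch, not the paper's implementation)."""

    def __init__(self, num_codes=512, dim=128):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z_e):
        # z_e: (batch, frames, dim) continuous latents from the motion encoder.
        flat = z_e.reshape(-1, z_e.size(-1))                  # (batch*frames, dim)
        # Squared Euclidean distance from each latent to every codebook vector.
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2.0 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        idx = dist.argmin(dim=-1).view(z_e.size(0), z_e.size(1))  # code indices
        z_q = self.codebook(idx)                              # quantized latents
        # Straight-through estimator: gradients bypass the discrete lookup.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, idx
```

In the second stage, the fused multimodal representations are aligned with this quantized motion space before being decoded into hand and body movements (see the framework overview below).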

SemGes: Semantics-Aware Gesture Generation

🛠️ SemGes Framework Overview

SemGes Framework

SemGes employs three training pathways:

(1) Global semantic coherence learning, which minimizes the discrepancy between the latent representations produced by the gesture and text encoders.

(2) Multimodal quantization learning, where the fused multimodal representations are aligned with the quantized motion codes and decoded into hand and body movements.

(3) Semantic relevance learning, which places additional emphasis on semantically meaningful (representational) gestures (a sketch of the two semantic objectives follows below).
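As a rough illustration of pathways (1) and (3), the sketch below writes the two semantic objectives in PyTorch style. The cosine-based coherence term, the frame-level semantic mask, and the tensor shapes are assumptions made for illustration; they are not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def semantic_coherence_loss(gesture_latent, text_latent):
    """Pathway (1): pull pooled gesture and text embeddings together.

    gesture_latent, text_latent: (batch, dim) tensors. A cosine-similarity
    objective is one plausible choice of latent-distance measure.
    """
    g = F.normalize(gesture_latent, dim=-1)
    t = F.normalize(text_latent, dim=-1)
    return (1.0 - (g * t).sum(dim=-1)).mean()

def semantic_relevance_loss(pred_motion, gt_motion, semantic_mask):
    """Pathway (3): up-weight reconstruction error on frames annotated as
    semantic (e.g., iconic or metaphoric) gestures.

    pred_motion, gt_motion: (batch, frames, dof)
    semantic_mask: (batch, frames), 1 where the frame carries a semantic
                   gesture and 0 otherwise (assumed annotation format).
    """
    frame_err = (pred_motion - gt_motion).pow(2).mean(dim=-1)   # (batch, frames)
    weighted = frame_err * semantic_mask
    return weighted.sum() / semantic_mask.sum().clamp(min=1.0)
```

In training, such terms would be added to the reconstruction and quantization objectives of pathway (2), with the weighting coefficients left as hyperparameters.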

📊 Main Results

Our model achieves state-of-the-art performance, with significant improvements on both the BEAT and TED Expressive datasets.

Main results

👥 User Study

We conducted extensive user studies to evaluate the naturalness and diversity of our generated gestures, as well as their alignment with speech content and timing. The results confirm that our model outperforms existing approaches.

SemGes: semantic gestures generated by our model.

🙏 Acknowledgements

We would like to sincerely thank Sachit Mirsha for his generous support in rendering the avatar animations used in this project.

📚 Citation

If you find our work useful for your research, please consider citing:

@misc{liu2025semgessemanticsawarecospeechgesture,
    title={SemGes: Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning},
    author={Lanmiao Liu and Esam Ghaleb and Aslı Özyürek and Zerrin Yumak},
    year={2025},
    eprint={2507.19359},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2507.19359}
}