Learn2Control

Controlling Avatar Diffusion with Learnable Gaussian Embedding

Xuan Gao     Jingtao Zhou     Dongyu Liu     Yuqi Zhou      Juyong Zhang
University of Science and Technology of China    

Contributions:
(1) We propose a novel diffusion control signal representation splatted from learnable Gaussians, which is dense, adaptive, expressive, and 3D-consistent.
(2) We use synthetic data to improve the generalization ability of the trained model, and introduce real/synthetic labels to prevent artifacts in the synthetic data from degrading the generated results.

Abstract

Recent advances in diffusion models have driven significant progress in digital human generation. However, most existing models still struggle to maintain 3D consistency, temporal coherence, and motion accuracy. A key reason for these shortcomings is the limited representation ability of commonly used control signals (e.g., landmarks, depth maps). In addition, the lack of diversity in identity and pose in public datasets further hinders progress in this area. In this paper, we analyze the shortcomings of current control signals and introduce a novel control signal representation that is optimizable, dense, expressive, and 3D-consistent. Our method embeds learnable neural Gaussians onto a parametric head surface, which greatly enhances the consistency and expressiveness of diffusion-based head models. On the data side, we synthesize a large-scale dataset spanning diverse poses and identities. In addition, we use real/synthetic labels to distinguish real from synthetic data, minimizing the impact of imperfections in the synthetic data on the generated head images. Extensive experiments show that our model outperforms existing methods in realism, expressiveness, and 3D consistency.

Motivation

Previous diffusion-based portrait generation models employ landmarks, normal maps, or depth maps as control signals, which often fail to accurately produce images with the required expressions and poses. This primarily stems from the sparsity of landmarks, as well as the low-frequency nature and 3D inconsistency of normal and depth maps. Our learnable Gaussian feature map is a dense, adaptive, expressive, and 3D-consistent control signal representation, and it achieves better quality in controlling head-motion generation.

Method

To address the limitations of existing public datasets in terms of identity diversity and pose richness, we use synthetic data to improve the generalization ability and view consistency of the trained model. We first track the FLAME coefficients of the driving frames. The learnable Gaussians in UV space are then transformed to 3D space according to the FLAME UV mapping. Finally, the transformed Gaussians are projected and splatted to serve as the control signal for a reference-guided diffusion model.
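Below is a minimal sketch of this pipeline, assuming hypothetical grid sizes, a nearest-vertex stand-in for the FLAME UV mapping, and a hard nearest-pixel splat in place of full Gaussian splatting; the real method interpolates barycentrically in UV triangles and splats proper 2D Gaussians with scale, rotation, and opacity.

    import torch

    UV_RES = 64      # side length of the learnable Gaussian grid in UV space (assumed)
    FEAT_DIM = 16    # learnable feature channels per Gaussian (assumed)
    IMG_RES = 128    # resolution of the splatted control map (assumed)

    # Learnable per-Gaussian features anchored at fixed UV positions.
    uv_coords = torch.rand(UV_RES * UV_RES, 2)
    features = torch.nn.Parameter(torch.randn(UV_RES * UV_RES, FEAT_DIM))

    def uv_to_3d(uv, verts, vert_uvs):
        # Lift UV-space Gaussians onto the posed head surface. Stand-in for
        # the FLAME UV mapping: snap each Gaussian to the vertex with the
        # nearest UV coordinate instead of barycentric interpolation.
        nearest = torch.cdist(uv, vert_uvs).argmin(dim=1)   # (N,)
        return verts[nearest]                               # (N, 3)

    def splat(points, feats, K):
        # Project Gaussians with intrinsics K and average features per pixel.
        proj = (K @ points.T).T                             # pinhole projection
        xy = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
        xi = xy[:, 0].round().long().clamp(0, IMG_RES - 1)
        yi = xy[:, 1].round().long().clamp(0, IMG_RES - 1)
        flat = yi * IMG_RES + xi                            # flattened pixel index
        acc = torch.zeros(IMG_RES * IMG_RES, feats.shape[1]).index_add(0, flat, feats)
        cnt = torch.zeros(IMG_RES * IMG_RES, 1).index_add(0, flat, torch.ones(len(flat), 1))
        img = acc / cnt.clamp(min=1.0)
        return img.view(IMG_RES, IMG_RES, -1).permute(2, 0, 1)   # (C, H, W)

    # Toy posed head: random vertices in front of the camera with random UVs;
    # the real pipeline uses the FLAME mesh posed by the tracked coefficients.
    verts = torch.randn(500, 3) + torch.tensor([0.0, 0.0, 3.0])
    vert_uvs = torch.rand(500, 2)
    K = torch.tensor([[float(IMG_RES), 0.0, IMG_RES / 2.0],
                      [0.0, float(IMG_RES), IMG_RES / 2.0],
                      [0.0, 0.0, 1.0]])

    control = splat(uv_to_3d(uv_coords, verts, vert_uvs), features, K)
    print(control.shape)  # torch.Size([16, 128, 128]) -> conditioning for the diffusion UNet

Because the features and splat are differentiable, gradients from the diffusion loss flow back into the per-Gaussian parameters, which is what makes the control signal learnable rather than hand-crafted.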

Results

Novel View Synthesis

Given a single reference image, we manipulate the poses of the generated head images by adjusting the pose parameters of the FLAME head model. Our method produces reasonable and consistent results even for large pose variations. This demonstrates that our learnable Gaussian embedding, combined with training on a synthetic dataset, effectively enhances the 3D consistency of diffusion models.
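As an illustration, the sketch below sweeps a yaw angle and re-poses a toy head; yaw_matrix and the random vertices are hypothetical stand-ins, since FLAME's global pose is actually an axis-angle vector applied to the tracked template.

    import math
    import torch

    def yaw_matrix(deg):
        # Rotation about the vertical axis; a simplified stand-in for
        # FLAME's global axis-angle pose parameter.
        t = math.radians(deg)
        c, s = math.cos(t), math.sin(t)
        return torch.tensor([[c, 0.0, s],
                             [0.0, 1.0, 0.0],
                             [-s, 0.0, c]])

    # Toy head vertices; the real pipeline poses the FLAME template with
    # the tracked shape and expression coefficients instead.
    verts = torch.randn(500, 3)

    for yaw in (-60, -30, 0, 30, 60):
        rotated = verts @ yaw_matrix(float(yaw)).T   # re-pose the head
        # `rotated` would be splatted into a control map (see the Method
        # sketch) and fed, with the reference image, to the diffusion model.
        print(yaw, rotated.shape)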

Comparison

We compare our work with Follow-Your-Emoji, GAGAvatar, X-Portrait, VOODOO 3D, and ROME. Our method clearly outperforms these methods in expressiveness and consistency.

Ablation Study

Without training on our synthetic dataset, the model may fail to synthesize head images under large poses. If the model is trained without real/synthetic labels, artifacts from the synthetic samples may leak into the generated results.
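The page does not specify how the real/synthetic labels enter the network; one common choice, sketched below under that assumption, is a learned two-entry embedding added to the diffusion timestep embedding, with the "real" label used at inference so that artifacts associated with the "synthetic" label are suppressed.

    import torch
    import torch.nn as nn

    class DomainLabelEmbedding(nn.Module):
        # A learned two-entry table (0 = real, 1 = synthetic) added to the
        # timestep embedding; a hypothetical design, not necessarily the
        # paper's exact conditioning mechanism.
        def __init__(self, dim):
            super().__init__()
            self.table = nn.Embedding(2, dim)

        def forward(self, t_emb, is_synth):
            return t_emb + self.table(is_synth)

    emb = DomainLabelEmbedding(dim=128)
    t_emb = torch.randn(4, 128)           # toy timestep embeddings
    labels = torch.tensor([0, 1, 1, 0])   # per-sample real/synthetic flags
    print(emb(t_emb, labels).shape)       # torch.Size([4, 128])
    # At inference, labels of all zeros ("real") steer generation away from
    # artifacts learned under the synthetic-data label.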

BibTeX

If you find our paper useful for your work, please cite:


@misc{gao2025controllingavatardiffusionlearnable,
  title={Controlling Avatar Diffusion with Learnable Gaussian Embedding}, 
  author={Xuan Gao and Jingtao Zhou and Dongyu Liu and Yuqi Zhou and Juyong Zhang},
  year={2025},
  eprint={2503.15809},
  archivePrefix={arXiv},
  primaryClass={cs.GR},
  url={https://arxiv.org/abs/2503.15809}, 
}

Acknowledgements

This research was supported by the National Natural Science Foundation of China (Nos. 62441224 and 62272433). The numerical calculations in this paper were performed on the supercomputing system at the Supercomputing Center of the University of Science and Technology of China.