ELITE synthesizes an animatable, photorealistic Gaussian head avatar from a casual monocular video. It leverages both a 3D data prior and a 2D generative prior to compensate for visual cues missing from the input video.
Abstract
We introduce ELITE, a method for Efficient Gaussian head avatar synthesis from a monocular video via Learned Initialization and TEst-time generative adaptation.
Prior works rely either on a 3D data prior or a 2D generative prior to compensate for missing visual cues in monocular videos.
However, methods based on a 3D data prior often struggle to generalize in the wild, while those based on a 2D generative prior are computationally heavy and prone to identity hallucination.
We identify a complementary synergy between these two priors and design an efficient system that achieves high-fidelity animatable avatar synthesis with strong in-the-wild generalization.
Specifically, we introduce a feed-forward Mesh2Gaussian Prior Model (MGPM) that enables fast initialization of a Gaussian avatar.
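To make the idea of a feed-forward mesh-to-Gaussian mapping concrete, here is a minimal toy sketch, entirely our own assumption rather than the paper's actual MGPM architecture: each tracked mesh vertex is lifted to one 3D Gaussian whose attributes are predicted by a small MLP (with random weights here; the real model would learn them from 3D data).

```python
import numpy as np

# Toy sketch (an illustrative assumption, NOT the paper's MGPM) of a
# feed-forward mesh-to-Gaussian mapping: one 3D Gaussian per mesh vertex,
# with attributes regressed from the vertex position by a tiny MLP.

rng = np.random.default_rng(0)

V = 100                                    # number of mesh vertices
vertices = rng.normal(size=(V, 3))         # tracked head-mesh vertex positions

# 2-layer MLP: 3 -> 32 -> 10 attributes per vertex
# (3 position offset, 3 log-scale, 4 RGBA). Weights are random here;
# in the real model they would be learned from 3D data.
W1, b1 = 0.1 * rng.normal(size=(3, 32)), np.zeros(32)
W2, b2 = 0.1 * rng.normal(size=(32, 10)), np.zeros(10)

def mesh_to_gaussians(verts):
    h = np.maximum(verts @ W1 + b1, 0.0)          # ReLU hidden layer
    attrs = h @ W2 + b2                           # per-vertex raw attributes
    means = verts + attrs[:, :3]                  # Gaussian centers anchored to the mesh
    scales = np.exp(attrs[:, 3:6])                # strictly positive scales
    rgba = 1.0 / (1.0 + np.exp(-attrs[:, 6:]))    # color/opacity squashed to [0, 1]
    return means, scales, rgba

means, scales, rgba = mesh_to_gaussians(vertices)
```

Because the mapping is a single forward pass, initializing the avatar costs one network evaluation instead of a per-subject optimization, which is what makes the initialization fast.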
To further bridge the domain gap at test time, we design a test-time generative adaptation stage, leveraging both real and synthetic images as supervision.
Unlike previous full diffusion denoising strategies that are slow and hallucination-prone, we propose a rendering-guided single-step diffusion enhancer that restores missing visual details, grounded on Gaussian avatar renderings.
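The adaptation loop described above can be sketched in miniature. The following is a toy illustration under our own assumptions (1-D "images", an identity renderer, and invented function names, none of which come from the paper): the enhancer takes one refinement step grounded on the current avatar rendering rather than iteratively denoising from pure noise, and its output supervises the avatar alongside the real frames.

```python
import numpy as np

# Toy sketch (our assumption, NOT the authors' code) of test-time adaptation
# with a rendering-guided single-step enhancer. "Images" are flat vectors.

rng = np.random.default_rng(0)
D = 16
target = rng.normal(size=D)                        # true subject appearance

def render(avatar_params):
    # Toy "differentiable renderer": identity map, for illustration only.
    return avatar_params

def enhance_single_step(rendering, detail_prior, strength=0.5):
    # ONE refinement step grounded on the avatar rendering, instead of a
    # full denoising chain from noise (slow and hallucination-prone).
    return rendering + strength * (detail_prior - rendering)

def adapt(avatar_params, real_frames, detail_prior, lr=0.05, steps=300):
    # Supervise with real frames AND enhanced renderings; the enhanced image
    # is treated as a detached pseudo ground truth (stop-gradient analogue).
    p = avatar_params.copy()
    for _ in range(steps):
        r = render(p)
        synthetic = enhance_single_step(r, detail_prior)
        grad = sum(2.0 * (r - f) for f in real_frames) + 2.0 * (r - synthetic)
        p = p - lr * grad / (len(real_frames) + 1)
    return p

real_frames = [target + 0.05 * rng.normal(size=D) for _ in range(3)]
detail_prior = target + 0.1 * rng.normal(size=D)   # generative prior's estimate

init = np.zeros(D)
adapted = adapt(init, real_frames, detail_prior)
err_before = np.linalg.norm(render(init) - target)
err_after = np.linalg.norm(render(adapted) - target)
```

Even in this toy setting, grounding the enhancer on the rendering keeps the pseudo ground truth close to the avatar's current state, which is the mechanism the paper credits for avoiding identity hallucination.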
Our experiments demonstrate that ELITE produces visually superior avatars compared to prior works, even under challenging expressions, while achieving 60x faster synthesis than the competing 2D generative prior method.
Explainer Video (Contains Narration)
Results: Gaussian avatar cross re-enactment
ELITE synthesizes authentic, identity-preserving avatars for subjects with diverse attributes, e.g., race, gender, age, and hairstyle, even when adapted using only 3 frames from an input monocular video.
Key idea: Synergistic collaboration of 3D data prior & 2D generative prior
We identify a mutually reinforcing synergy between a 3D data prior and a 2D generative prior.
Our key idea is twofold: (1) the weak in-the-wild generalization of 3D data prior methods can be alleviated by supervision with synthetic images from a generative model, and (2) the slow sampling and hallucinations of 2D generative prior methods can be mitigated by grounding them on 3D avatar renderings.
Avatar re-enactment comparison (Unseen ID & expressions)
We compare the quality of the ELITE-synthesized avatars with recent competing methods.
ELITE produces Gaussian avatars with better identity preservation (iris color, hairstyle), as well as stronger generalization to novel head poses and fine-grained expressions, including gaze changes and one-eye winking.
Please find more results in the paper and the explainer video.
Citation
@article{youwang2026elite,
title = {ELITE: Efficient Gaussian Head Avatar from a Monocular Video via Learned Initialization and TEst-time Generative Adaptation},
author = {Youwang, Kim and Lee, Hyoseok and Park, Subin and Pons-Moll, Gerard and Oh, Tae-Hyun},
journal = {arXiv preprint:},
year = {2026}
}