We present HAHA, a novel approach for generating animatable human avatars from monocular input videos.
The proposed method learns a trade-off between Gaussian splatting and a textured mesh to achieve efficient, high-fidelity rendering. We demonstrate its efficiency in animating and rendering full-body human avatars controlled via the SMPL-X parametric model. Our model learns to apply Gaussian splatting only in regions of the SMPL-X mesh where it is necessary, such as hair and out-of-mesh clothing. As a result, a minimal number of Gaussians is needed to represent the full avatar, and rendering artifacts are reduced. This also allows us to animate small body parts such as fingers, which are traditionally disregarded.
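To make the idea concrete, below is a minimal sketch of how such a trade-off could be realized: a transparency regularizer pushes per-Gaussian opacities toward zero wherever the textured mesh alone suffices, and low-opacity Gaussians are then pruned so splats survive only on out-of-mesh geometry. This is not the authors' implementation; the function names, the threshold value, and the PyTorch-style parameter layout are all illustrative assumptions.

import torch

def transparency_loss(opacity: torch.Tensor) -> torch.Tensor:
    # Hypothetical regularizer: encourage Gaussians to fade out wherever
    # the textured mesh already explains the image, so opaque Gaussians
    # remain only where the mesh falls short (hair, loose clothing).
    return opacity.abs().mean()

def prune_gaussians(params: dict, threshold: float = 0.05) -> dict:
    # Drop Gaussians whose learned opacity fell below an (assumed)
    # threshold, leaving a minimal set concentrated on out-of-mesh regions.
    keep = params["opacity"].squeeze(-1) > threshold
    return {k: v[keep] for k, v in params.items()}

# Toy usage with random parameters in place of a fitted avatar.
params = {
    "xyz": torch.randn(1000, 3),      # Gaussian centers
    "opacity": torch.rand(1000, 1),   # learned per-Gaussian opacities
}
loss = transparency_loss(params["opacity"])  # added to the photometric loss
params = prune_gaussians(params)
print(params["xyz"].shape[0], "Gaussians kept")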
We demonstrate the effectiveness of our approach on two open datasets: SnapshotPeople and X-Humans. On SnapshotPeople, our method achieves reconstruction quality on par with the state of the art while using less than a third of the Gaussians. On novel poses from X-Humans, HAHA outperforms the previous state of the art both quantitatively and qualitatively.
Figure: heatmap visualization of Gaussian density, showing how many Gaussians are used across the avatar.
@misc{svitov2024haha,
  title={HAHA: Highly Articulated Gaussian Human Avatars with Textured Mesh Prior},
  author={David Svitov and Pietro Morerio and Lourdes Agapito and Alessio Del Bue},
  year={2024},
  eprint={2404.01053},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}