Dynamic Gaussian Mesh:
High-Fidelity, Time-Consistent Mesh Reconstruction from Monocular Videos
Isabella Liu, Hao Su, Xiaolong Wang
UC San Diego

Abstract

Modern 3D engines and graphics applications require meshes as a memory-efficient representation that supports texture editing and fast physically based rendering. However, obtaining a high-quality mesh from everyday visual observations remains very difficult, and the problem becomes even more challenging for dynamic scenes and objects. We introduce Dynamic Gaussians Mesh (DG-Mesh), a framework that reconstructs a high-fidelity, time-consistent mesh from a single monocular video. Our work leverages recent advances in 3D Gaussian Splatting, which not only achieves better rendering results but also provides an explicit point cloud representation of the scene. Building on this representation, DG-Mesh learns to convert the point clouds to a mesh and to track the mesh vertices over time. In our experiments, DG-Mesh not only produces significantly better reconstructions than baselines, especially on thin structures (e.g., bird wings, butterflies), but also enables downstream applications such as texture editing using the learned correspondences.
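At each time step, the deformed Gaussian centers form an explicit point cloud that is converted into a surface mesh. The sketch below illustrates one way such a per-frame conversion could look; it uses Open3D's Poisson surface reconstruction as a stand-in for the paper's learned mesh-extraction step, and `gaussian_centers` is a hypothetical (N, 3) array of Gaussian means rather than an identifier from the released code.

    # Minimal sketch (not the paper's actual pipeline): turn a Gaussian point
    # cloud into a surface mesh, here via Open3D Poisson reconstruction.
    import numpy as np
    import open3d as o3d

    def mesh_from_gaussian_centers(gaussian_centers: np.ndarray) -> o3d.geometry.TriangleMesh:
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(gaussian_centers)
        # Poisson reconstruction needs oriented normals; estimate them from neighbors.
        pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=16))
        pcd.orient_normals_consistent_tangent_plane(k=16)
        mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
        return mesh

    # Usage: extract one mesh per video frame from the deformed Gaussian centers.
    # meshes = [mesh_from_gaussian_centers(centers_t) for centers_t in centers_over_time]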

Method Pipeline


Video


Mesh Extraction and Rendering

Application: Dynamic Texture Editing


Real Reconstruction Results

Acknowledgements

The website template was borrowed from BakedSDF and Ref-NeRF.