Dynamic Gaussians Mesh:
Consistent Mesh Reconstruction from Monocular Videos
Isabella Liu, Hao Su, Xiaolong Wang
UC San Diego

denotes equal advisory

D-NeRF Jumpingjacks Trajectory


Modern 3D engines and graphics pipelines require meshes as a memory-efficient representation, which allows efficient rendering, geometry processing, texture editing, and many other downstream operations. However, it is still highly difficult to obtain a high-quality mesh, in terms of both structure and detail, from monocular visual observations. The problem becomes even more challenging for dynamic scenes and objects. To this end, we introduce Dynamic Gaussians Mesh (DG-Mesh), a framework that reconstructs a high-fidelity and time-consistent mesh from a single monocular video. Our work leverages recent advances in 3D Gaussian Splatting to construct a temporally consistent mesh sequence from a video. Building on top of this representation, DG-Mesh recovers high-quality meshes from the Gaussian points and can track the mesh vertices over time, which enables applications such as texture editing on dynamic objects. We introduce Gaussian-Mesh Anchoring, which encourages evenly distributed Gaussians and yields better mesh reconstruction through mesh-guided densification and pruning of the deformed Gaussians. By applying a cycle-consistent deformation between the canonical and the deformed space, we can project the anchored Gaussians back to the canonical space and optimize the Gaussians across all time frames. In evaluations on different datasets, DG-Mesh provides significantly better mesh reconstruction and rendering than baselines.
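The cycle-consistency idea above can be sketched in a few lines: deform canonical Gaussian centers forward to a time step, (conceptually) anchor them to the mesh there, then map them back and penalize any mismatch with the original canonical positions. The sketch below is a minimal toy illustration, not the paper's implementation: the `forward_deform`/`backward_deform` functions are hypothetical stand-ins for the learned deformation networks, and a simple rigid translation replaces the real deformation field.

```python
import numpy as np

def forward_deform(x, t):
    # Hypothetical forward deformation: canonical -> deformed space at time t.
    # A toy rigid translation stands in for a learned deformation network.
    return x + t * np.array([0.1, 0.0, 0.0])

def backward_deform(x, t):
    # Hypothetical backward deformation: deformed space at time t -> canonical.
    return x - t * np.array([0.1, 0.0, 0.0])

def cycle_consistency_loss(canonical_pts, t):
    """Mean squared distance between canonical Gaussian centers and their
    forward->backward round-trip images."""
    deformed = forward_deform(canonical_pts, t)
    # In the full pipeline, mesh-guided densification/pruning would anchor
    # `deformed` to the reconstructed mesh here before mapping back.
    recovered = backward_deform(deformed, t)
    return float(np.mean(np.sum((canonical_pts - recovered) ** 2, axis=1)))

pts = np.random.rand(100, 3)   # toy canonical Gaussian centers
loss = cycle_consistency_loss(pts, t=0.5)
# With exactly inverse deformations the loss is ~0; training drives the two
# learned networks toward this condition so anchored Gaussians can be
# optimized jointly in the canonical space across all time frames.
```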



Training Process


4D GS Center

Anchored GS Center


Mesh Rendering


Dynamic Mesh on D-NeRF Dataset (Interactive 3D Viewer)


Ours (DG-Mesh)



Dynamic Mesh on DG-Mesh Dataset (Interactive 3D Viewer)


Results on Real Data

Nerfies: Toby-sit

Unbiased4D: Real Cactus

iPhone Captured Video



    title={Dynamic Gaussians Mesh: Consistent Mesh Reconstruction from Monocular Videos}, 
    author={Isabella Liu and Hao Su and Xiaolong Wang},


The website template was borrowed from BakedSDF and HexPlane.