Multi-Canonical Dynamic Gaussian Splatting
for Accurate Geometry and High-Fidelity Rendering

Department of Metaverse Convergence, Chung-Ang University, South Korea
Our method achieves both high-fidelity rendering and geometrically accurate mesh reconstruction, resolving the trade-off inherent in prior dynamic Gaussian Splatting methods.

Existing dynamic Gaussian Splatting methods face a persistent trade-off between rendering quality and mesh accuracy. Our method resolves this trade-off by improving both jointly, yielding sharp renderings and clean mesh geometry across diverse dynamic scenes.

Abstract

While 3D Gaussian Splatting (3DGS) has enabled high-quality reconstruction of dynamic scenes, a persistent tension remains between Novel View Synthesis (NVS) fidelity and mesh geometric accuracy. We argue that this arises because deformation modeling, surface alignment, and appearance expressiveness compete for shared representational degrees of freedom. To mitigate this rendering–geometry trade-off, we propose Multi-Canonical Dynamic Gaussian Splatting.

Multi-Canonical Deformation Modeling (MCDM) partitions the sequence into localized temporal segments to reduce motion over-smoothing. Surface-Aligned Anisotropic Densification (SAAD) initializes 2D Gaussians with scales and rotations derived from local face geometry for more faithful surface support. Residual-Based Expressiveness Restoration (RBER) restores fine appearance details through sequential 3D and 2D residual refinement on top of a geometry-stabilized base. Experiments on DG-Mesh, D-NeRF, and Nerfies show that our method improves the rendering–geometry balance, yielding consistent gains in rendering quality while preserving or improving mesh accuracy.

Video Results

Each video shows our reconstructed dynamic scene (Gaussian rendering + mesh) across time.

Girlwalk

Hook

Jumping Jacks

Stand Up

Video Comparison

Drag the divider left or right to compare our method with DG-Mesh.

Horse

DG-Mesh | Ours

Beagle

DG-Mesh | Ours

Bird

DG-Mesh | Ours

3D Mesh Results

Drag to rotate  ·  Scroll to zoom  ·  Right-drag to pan

Bird


Duck


T-Rex


Method

Overview of the proposed framework integrating MCDM, SAAD, and RBER.

Our pipeline integrates three core modules that progressively decouple the competing objectives in dynamic Gaussian reconstruction — temporal deformation span, geometry-support formation, and residual appearance modeling.

① Multi-Canonical Deformation Modeling (MCDM)
Addresses excessive temporal deformation span by partitioning the sequence into K localized temporal groups, each assigned to an independent canonical Gaussian set. A global warm-up stage first establishes a sequence-level prior, which is then cloned into K group-specific canonical sets. Since each set covers only a restricted temporal segment, the deformation network operates over a reduced motion range, mitigating over-smoothing and improving per-frame fidelity.
② Surface-Aligned Anisotropic Densification (SAAD)
Corrects the geometry-driven bias of conventional isotropic densification under mesh supervision. Instead of inserting primitives at face centroids with isotropic scales, SAAD derives the initial covariance of each new Gaussian from the eigendecomposition of local face vertex covariance, aligning its tangent-plane support with the local surface structure. This turns densification from a purely coverage-seeking operation into a surface-aligned refinement process, improving both geometric coverage and mesh accuracy.
③ Residual-Based Expressiveness Restoration (RBER)
Recovers the fine-scale appearance capacity attenuated by strong surface supervision. Built on top of the geometry-stabilized base, RBER employs two sequential residual stages: a 3D Gaussian deformation residual network (f_DR) that applies local parameter corrections to each Gaussian, followed by a 2D image residual branch using a Local Texture Estimator (f_LTE) that recovers remaining high-frequency texture discrepancy. This restores appearance details without compromising the geometric consistency established by MCDM and SAAD.

Qualitative Comparison

Qualitative comparison with DG-Mesh and Dynamic-2DGS on DG-Mesh and D-NeRF datasets.

We compare against state-of-the-art dynamic reconstruction methods — DG-Mesh and Dynamic-2DGS — on the DG-Mesh and D-NeRF datasets. For each method, we show both the direct Gaussian Splatting (GS) rendering and the extracted mesh surface. DG-Mesh yields blurry renderings, and Dynamic-2DGS produces fragmented or incomplete surfaces (e.g., missing parts in Horse). Our method reconstructs sharp appearances with topologically clean meshes, achieving superior performance in both rendering fidelity and geometric accuracy.

Ablation Study

Effectiveness of MCDM

Ablation on MCDM: baseline vs. multi-canonical deformation across an extended temporal sequence.

We compare the baseline (single canonical space) against MCDM (multiple canonical spaces) across an extended temporal sequence (T = 0, …, 10). The baseline forces a single canonical representation to span the entire motion range, causing the deformation network to over-smooth time-specific structures, leading to blurred renderings and incoherent geometry. MCDM redistributes temporal modeling burden across K canonical sets. Since each canonical set covers only a localized segment, per-frame fidelity improves substantially, yielding sharper appearances and more coherent mesh reconstruction.


Effectiveness of SAAD

Ablation on SAAD: isotropic densification vs. surface-aligned anisotropic densification.

We compare isotropic densification (baseline) against our Surface-Aligned Anisotropic Densification (SAAD). Under mesh supervision, isotropic primitives inserted at face centroids ignore local surface anisotropy, progressively degrading rendering quality as they accumulate (visible blur at 10k → 25k iterations). SAAD derives the initial covariance of each new Gaussian from local face vertex statistics via eigendecomposition, producing surface-aligned primitives that preserve structural detail. The result is sharper boundaries and cleaner local structure throughout training.

BibTeX

@article{TODO,
  author  = {TODO},
  title   = {Multi-Canonical Dynamic Gaussian Splatting for Accurate Geometry and High-Fidelity Rendering},
  journal = {TODO},
  year    = {2025}
}