SDFRaster: Distance Field Rasterization for End-to-End Mesh Reconstruction

SIGGRAPH 2026

Jinkai Cui, Kaiwen Song, Chumeng Niu, Juyong Zhang
University of Science and Technology of China

SDFRaster introduces a rasterizable SDF representation for end-to-end mesh reconstruction from multi-view images. With this representation, our method achieves high-quality mesh reconstruction.

Abstract

Rasterization-based methods have recently enabled high-quality novel view synthesis at real-time rates, but their underlying volumetric primitives do not expose a direct, globally consistent surface representation, leaving surface extraction to heuristic post-processing. In contrast, implicit signed distance field (SDF) methods provide well-defined surfaces but are typically optimized with computationally expensive ray marching. We propose \textbf{SDFRaster}, a rasterizable SDF representation that bridges this gap by combining the efficiency of rasterization with a signed distance field for end-to-end mesh reconstruction. Starting from a Delaunay tetrahedralization, we optimize a continuous SDF over a tetrahedral grid and render it efficiently by rasterizing tetrahedra and alpha-compositing their contributions. We further integrate differentiable Marching Tetrahedra into the optimization loop, enabling end-to-end mesh reconstruction without post-processing mesh extraction. Experiments on DTU and Tanks and Temples demonstrate that SDFRaster achieves higher-quality and more complete surface reconstructions with lower storage cost than state-of-the-art approaches.

Highlight

01

Adaptive Tetrahedral SDF

Using a Delaunay tetrahedral grid as the geometric carrier allows adaptive mesh extraction instead of relying on a fixed-resolution grid, producing more compact meshes while preserving geometric detail.
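To make the geometric carrier concrete, here is a minimal sketch of building a Delaunay tetrahedralization from a 3D point set with SciPy. The random-point input is an illustrative assumption; the paper does not specify how its seed points are obtained.

```python
import numpy as np
from scipy.spatial import Delaunay

# Sample points in the unit cube; in practice the seed points could
# come from, e.g., a sparse SfM point cloud (an assumption here).
rng = np.random.default_rng(0)
points = rng.random((200, 3))

# For 3D input, scipy's Delaunay yields a tetrahedralization:
# each simplex is a tetrahedron given by 4 vertex indices.
tet = Delaunay(points)
print(tet.simplices.shape)  # (num_tets, 4)
```

Because the tetrahedra adapt to the point distribution, regions with denser seed points get finer cells, which is what allows adaptive rather than fixed-resolution mesh extraction.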

02

SDF Rasterization

We efficiently render the learned SDF by rasterizing tetrahedra and alpha-compositing SDF-derived opacities, avoiding expensive ray marching during optimization.
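The compositing step can be sketched as follows. The logistic mapping from signed distance to opacity is an illustrative assumption (the paper's exact SDF-to-opacity function is not given here); the front-to-back compositing itself is the standard alpha-blending formula.

```python
import numpy as np

def sdf_to_alpha(sdf, beta=0.02):
    # Map signed distance to opacity in (0, 1): near-surface (sdf ~ 0)
    # and interior (sdf < 0) samples become opaque. This logistic form
    # is an assumption for illustration, not the paper's exact mapping.
    return 1.0 / (1.0 + np.exp(np.asarray(sdf) / beta))

def composite(sdfs, colors):
    # Front-to-back alpha compositing over depth-sorted tetrahedra:
    # C = sum_i T_i * alpha_i * c_i, with T_i = prod_{j<i} (1 - alpha_j).
    alphas = sdf_to_alpha(sdfs)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = transmittance * alphas
    return weights @ np.asarray(colors, dtype=float), weights

# One pixel whose ray crosses three tetrahedra, nearest first.
color, w = composite([0.05, -0.01, 0.08],
                     [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
```

Note that no ray marching occurs: each tetrahedron contributes once per pixel it covers, so the cost scales with rasterized fragments rather than with samples along a ray.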

03

End-to-End Mesh Reconstruction

We apply differentiable Marching Tetrahedra inside the optimization loop, so meshes are extracted during training. This enables end-to-end reconstruction without post-processing and lets us supervise the mesh directly.
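The key property that makes Marching Tetrahedra differentiable is that each extracted mesh vertex is a linear interpolation of tetrahedron edge endpoints, with weights that are smooth functions of the SDF values. A minimal sketch of that edge rule (the full per-tetrahedron case table is omitted):

```python
import numpy as np

def edge_zero_crossing(p0, p1, s0, s1):
    # Place a mesh vertex on a tetrahedron edge whose endpoint SDF
    # values s0, s1 have opposite signs. The interpolation weight
    # t = s0 / (s0 - s1) is differentiable in s0 and s1, which is what
    # lets gradients from a mesh loss flow back into the SDF.
    t = s0 / (s0 - s1)
    return (1.0 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)

# A sign change along one edge of a tetrahedron: the linearly
# interpolated SDF vanishes a quarter of the way along the edge.
v = edge_zero_crossing([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], s0=0.25, s1=-0.75)
# v == [0.25, 0.0, 0.0]
```

Because vertex positions depend smoothly on the SDF (away from degenerate s0 == s1 configurations), losses defined on the extracted mesh propagate gradients to the per-vertex SDF values.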

Motivation

Rasterization-based methods enable efficient optimization, but their volumetric primitives do not provide a direct and globally consistent surface representation, often requiring heuristic post-processing for mesh extraction. Implicit SDF methods define surfaces explicitly, yet usually rely on expensive ray marching during training. SDFRaster bridges these two families by combining the efficiency of rasterization with a well-defined signed distance field for end-to-end mesh reconstruction.

Method

We learn a continuous SDF on a Delaunay tetrahedral grid, using a shared multi-resolution hash encoder to predict SDF values at vertices and appearance per tetrahedron. We render images by rasterizing tetrahedra and alpha-compositing SDF-derived opacities. We apply differentiable Marching Tetrahedra on the tetrahedral grid with the learned SDF values to extract meshes inside the optimization loop, enabling end-to-end mesh reconstruction.
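The three steps above can be summarized as one training iteration. Everything below is a hypothetical structural sketch: `encode_sdf`, `rasterize_composite`, and `marching_tets` are placeholder stand-ins (here a toy analytic SDF and crude proxies), not the paper's actual components.

```python
import numpy as np

def encode_sdf(vertices):
    # Stand-in for the multi-resolution hash encoder: an analytic
    # sphere SDF centered in the unit cube (assumption for illustration).
    return np.linalg.norm(vertices - 0.5, axis=1) - 0.3

def rasterize_composite(sdf):
    # Stand-in for the tetrahedra rasterizer: returns a tiny "image"
    # whose brightness crudely reflects overall opacity.
    return np.clip(1.0 - np.abs(sdf).mean(), 0.0, 1.0) * np.ones((4, 4))

def marching_tets(vertices, sdf):
    # Stand-in for differentiable Marching Tetrahedra: keep interior
    # vertices as a crude proxy for the extracted surface.
    return vertices[sdf < 0]

vertices = np.random.default_rng(1).random((64, 3))
sdf = encode_sdf(vertices)              # 1) SDF at grid vertices
image = rasterize_composite(sdf)        # 2) rasterize + alpha-composite
mesh_pts = marching_tets(vertices, sdf) # 3) in-loop mesh extraction
photo_loss = ((image - 0.5) ** 2).mean()  # photometric supervision
```

In the real pipeline both the rendered image (photometric loss) and the extracted mesh (direct mesh supervision) feed gradients back into the shared encoder.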

Results

Experiments on DTU and Tanks and Temples show that SDFRaster achieves higher-quality and more complete surface reconstructions than existing rasterization-based baselines while keeping the extracted meshes compact. Compared with methods that rely on post-processing or depth fusion, our approach preserves cleaner topology, sharper edges, and better thin-structure recovery.

Background Reconstruction

Beyond reconstructing the main object, SDFRaster can also recover background regions thanks to its globally consistent geometry field and adaptive tetrahedral representation. In contrast, 2DGS-style mesh extraction pipelines often focus on the foreground object and miss surrounding scene structures.

BibTeX

@inproceedings{cui2026sdfraster,
  title     = {SDFRaster: Distance Field Rasterization for End-to-End Mesh Reconstruction},
  author    = {Cui, Jinkai and Song, Kaiwen and Niu, Chumeng and Zhang, Juyong},
  booktitle = {ACM SIGGRAPH 2026 Conference Papers},
  year      = {2026}
}