NePO: Neural Point Octrees for Large-Scale Novel View Synthesis

Lewis N, Rückert D, Stamminger M, Franke L (2025)


Publication Type: Journal article

Publication year: 2025

Journal: Computer Graphics Forum

DOI: 10.1111/cgf.70287

Abstract

Point-based radiance field rendering produces impressive results for novel-view synthesis tasks. However, established methods are restricted to object-centric datasets or room-sized scenes, as computational resources and model capacity are limited. To overcome this limitation, we introduce neural point octrees (NePOs) for radiance field rendering, which enable optimisation and rendering of large-scale datasets at varying detail levels, including different acquisition modalities such as camera drones and LiDAR vehicles. Our method organises input point clouds into an octree from the bottom up, enabling level-of-detail (LOD) selection during rendering. Appearance descriptors for each point are optimised using the RGB captures, enabling our system to self-refine and address real-world challenges such as capture coverage discrepancies and SLAM pose drift. The refinement is achieved by adaptively densifying octree nodes during training and optimising camera poses via gradient descent. Overall, our approach efficiently optimises scenes with thousands of images and renders scenes containing hundreds of millions of points in real time.
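The bottom-up octree build and distance-based LOD cut the abstract describes could be sketched roughly as follows. This is a hedged illustration under assumed conventions, not NePO's implementation; all names (`build_bottom_up`, `select_lod`) and parameters (`pixel_threshold`, `focal`) are hypothetical.

```python
# Illustrative sketch only: bucket points into leaf cells, merge 2x2x2 sibling
# blocks upward into an octree, then cut the tree per view by projected size.
import numpy as np

class OctreeNode:
    def __init__(self, center, size, point_ids):
        self.center = center        # node centre in world space
        self.size = size            # edge length of the cubic cell
        self.point_ids = point_ids  # indices of the points under this node
        self.children = []          # up to 8 child nodes

def build_bottom_up(points, leaf_size):
    """Quantise points into leaf cells, then merge siblings level by level."""
    keys = np.floor(points / leaf_size).astype(np.int64)
    leaves = {}
    for i, k in enumerate(map(tuple, keys)):
        leaves.setdefault(k, []).append(i)
    nodes = {k: OctreeNode((np.array(k) + 0.5) * leaf_size, leaf_size, ids)
             for k, ids in leaves.items()}
    size = leaf_size
    while len(nodes) > 1:
        # Group cells by their parent cell key (integer division halves the grid).
        parents = {}
        for k, node in nodes.items():
            pk = tuple(int(c) // 2 for c in k)
            parents.setdefault(pk, []).append(node)
        size *= 2
        nodes = {}
        for pk, children in parents.items():
            ids = [i for c in children for i in c.point_ids]
            parent = OctreeNode((np.array(pk) + 0.5) * size, size, ids)
            parent.children = children
            nodes[pk] = parent
    return next(iter(nodes.values()))

def select_lod(node, cam_pos, pixel_threshold=1.0, focal=500.0, out=None):
    """Cut the tree where a node's projected size drops below a pixel budget."""
    if out is None:
        out = []
    dist = max(np.linalg.norm(node.center - cam_pos), 1e-6)
    if not node.children or focal * node.size / dist < pixel_threshold:
        out.append(node)  # coarse enough (or a leaf): render this node's points
    else:
        for c in node.children:
            select_lod(c, cam_pos, pixel_threshold, focal, out)
    return out
```

A distant camera would then select a handful of coarse nodes, while a nearby one descends to the leaves, which is the behaviour that lets a scene with hundreds of millions of points stay within a per-frame point budget.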

How to cite

APA:

Lewis, N., Rückert, D., Stamminger, M., & Franke, L. (2025). NePO: Neural Point Octrees for Large-Scale Novel View Synthesis. Computer Graphics Forum. https://doi.org/10.1111/cgf.70287

MLA:

Lewis, Noah, et al. "NePO: Neural Point Octrees for Large-Scale Novel View Synthesis." Computer Graphics Forum (2025).
