LiveNVS: Neural View Synthesis on Live RGB-D Streams

Fink L, Rückert D, Franke L, Keinert J, Stamminger M (2023)


Publication Language: English

Publication Type: Conference contribution

Publication year: 2023

Conference Proceedings Title: SIGGRAPH Asia 2023 Proceedings

Event location: Sydney, AU

URI: https://arxiv.org/abs/2311.16668

Abstract

Existing real-time RGB-D reconstruction approaches, like KinectFusion, lack real-time photo-realistic visualization. This is due to noisy, oversmoothed, or incomplete geometry and blurry textures, which are fused from imperfect depth maps and camera poses. Recent neural rendering methods can overcome many such artifacts but are mostly optimized for offline usage, hindering their integration into a live reconstruction pipeline.
In this paper, we present LiveNVS, a system that enables neural novel view synthesis on a live RGB-D input stream with very low latency and real-time rendering. Based on the RGB-D input stream, novel views are rendered by projecting neural features into the target view via a densely fused depth map and aggregating the features in image space into a target feature map. A generalizable neural network then translates the target feature map into a high-quality RGB image. LiveNVS achieves state-of-the-art neural rendering quality on unknown scenes during capturing, allowing users to virtually explore the scene and assess reconstruction quality in real time.
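
As a reading aid for the pipeline the abstract describes, the following Python (PyTorch) sketch shows one plausible interpretation: warp per-view neural features into the target view via the fused depth map, aggregate them in image space, and decode the target feature map to RGB. It is not the authors' code; all names (backward_warp, RenderNet, render_novel_view), the plain mean aggregation, and the tiny decoder are assumptions, and per-view feature extraction, occlusion handling, and the learned aggregation weights from the paper are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

def backward_warp(src_feat, tgt_depth, K, T_tgt_to_src):
    """Sample a source feature map where target pixels, unprojected with the
    fused target depth, land in the source view (hypothetical helper).
    src_feat: (C, H, W); tgt_depth: (H, W); K: (3, 3); T_tgt_to_src: (4, 4)."""
    C, H, W = src_feat.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).reshape(3, -1)   # homogeneous pixels
    cam = torch.linalg.inv(K) @ pix * tgt_depth.reshape(1, -1)        # unproject with fused depth
    cam_h = torch.cat([cam, torch.ones(1, H * W)], dim=0)
    src_cam = (T_tgt_to_src @ cam_h)[:3]                              # move into source frame
    proj = K @ src_cam
    uv = proj[:2] / proj[2].clamp(min=1e-6)                           # perspective divide
    grid = torch.stack([uv[0] / (W - 1) * 2 - 1,                      # normalize for grid_sample
                        uv[1] / (H - 1) * 2 - 1], dim=-1).reshape(1, H, W, 2)
    return F.grid_sample(src_feat[None], grid, align_corners=True)[0]

class RenderNet(nn.Module):
    """Hypothetical stand-in for the generalizable translation network."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

def render_novel_view(src_feats, src_poses, tgt_pose, tgt_depth, K, net):
    """Warp each source feature map into the target view, aggregate in image
    space (a plain mean here; the paper learns the aggregation), then decode
    the target feature map to an RGB image. Poses are camera-to-world."""
    warped = [backward_warp(f, tgt_depth, K, torch.linalg.inv(p) @ tgt_pose)
              for f, p in zip(src_feats, src_poses)]
    target_feat = torch.stack(warped).mean(dim=0)
    return net(target_feat[None])   # (1, 3, H, W) RGB image

Given N source feature maps of shape (32, H, W) with matching camera-to-world poses, calling render_novel_view with a fused target depth map and a RenderNet instance yields one synthesized RGB frame; running this per target pose approximates the live loop the abstract describes.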

How to cite

APA:

Fink, L., Rückert, D., Franke, L., Keinert, J., & Stamminger, M. (2023). LiveNVS: Neural View Synthesis on Live RGB-D Streams. In ACM (Eds.), SIGGRAPH Asia 2023 Proceedings. Sydney, AU.

MLA:

Fink, Laura, et al. "LiveNVS: Neural View Synthesis on Live RGB-D Streams." SIGGRAPH Asia 2023 Proceedings, ACM, 2023.
