Mathias Harrer¹, Linus Franke¹, Laura Fink¹,², Marc Stamminger¹, Tim Weyrich¹
¹ Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
² Fraunhofer IIS, Erlangen, Germany
Abstract: Novel-view synthesis is an ill-posed problem in that it requires inference of previously unseen information. Recently, reviving the traditional field of image-based rendering, neural methods have proved particularly suitable for this interpolation/extrapolation task; however, they often require a-priori scene completeness or costly preprocessing steps and generally suffer from long (scene-specific) training times. Our work draws from recent progress in neural spatio-temporal supersampling to enhance a state-of-the-art neural renderer's ability to infer novel-view information at inference time. We adapt a supersampling architecture [Xiao et al. 2020], which resamples previously rendered frames, to instead recombine nearby camera images in a multi-view dataset. These input frames are warped into a joint target frame, guided by the most recent (point-based) scene representation, followed by neural interpolation. The resulting architecture gains sufficient robustness to significantly improve transferability to previously unseen datasets. In particular, this enables novel applications for neural rendering where dynamically streamed content is directly incorporated in a (neural) image-based reconstruction of a scene. As we will show, our method reaches state-of-the-art performance when compared to previous works that rely on static and sufficiently densely sampled scenes; in addition, we demonstrate our system's particular suitability for dynamically streamed content, where our approach produces high-fidelity novel-view synthesis even with significantly fewer available frames than competing neural methods.
[Webpage] | [Paper PDF] | [Paper PDF low-res] | [Video] | [Bibtex]
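As a rough illustration of the geometry-guided warping described in the abstract, the following hypothetical PyTorch sketch backward-warps one nearby source image into the target view using a depth map rendered from a point-based proxy. All function names, tensor shapes, the pinhole-camera model, and the simple blending note are illustrative assumptions and are not taken from the repository's implementation.

```python
# Hypothetical sketch, not the repository's code: warp a nearby source
# capture into the target view, guided by a depth map rendered from the
# point-based scene proxy. Conventions (camera-to-world poses, +z forward
# pinhole model) are assumptions for illustration only.
import torch
import torch.nn.functional as F

def warp_source_to_target(src_img, target_depth, K, src_pose, tgt_pose):
    """Backward-warp src_img (1,3,H,W) into the target view.

    target_depth : (1,1,H,W) depth rendered from the point-based proxy.
    K            : (3,3) shared pinhole intrinsics.
    src_pose, tgt_pose : (4,4) camera-to-world matrices.
    """
    _, _, H, W = src_img.shape
    device = src_img.device

    # Pixel grid of the target view in pixel coordinates (u = column, v = row).
    v, u = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    ones = torch.ones_like(u)
    pix = torch.stack([u, v, ones], dim=0).reshape(3, -1)          # (3, H*W)

    # Unproject target pixels into camera space using the proxy depth ...
    cam_pts = (torch.linalg.inv(K) @ pix) * target_depth.reshape(1, -1)
    cam_pts_h = torch.cat([cam_pts, torch.ones(1, H * W, device=device)], dim=0)

    # ... lift them to world space and transform into the source camera.
    world_pts = tgt_pose @ cam_pts_h
    src_cam = torch.linalg.inv(src_pose) @ world_pts
    proj = K @ src_cam[:3]
    uv = proj[:2] / proj[2:].clamp(min=1e-6)                       # (2, H*W)

    # Normalize to [-1, 1] and resample the source image at those locations.
    grid = torch.stack(
        [uv[0] / (W - 1) * 2 - 1, uv[1] / (H - 1) * 2 - 1], dim=-1
    ).reshape(1, H, W, 2)
    return F.grid_sample(src_img, grid, align_corners=True)


# Usage sketch: in the full pipeline, several nearby captures would be
# warped this way and then blended/interpolated by a neural network.
if __name__ == "__main__":
    H, W = 64, 96
    K = torch.tensor([[80.0, 0.0, W / 2], [0.0, 80.0, H / 2], [0.0, 0.0, 1.0]])
    src_img = torch.rand(1, 3, H, W)
    target_depth = torch.full((1, 1, H, W), 2.0)
    warped = warp_source_to_target(src_img, target_depth, K,
                                   torch.eye(4), torch.eye(4))
    print(warped.shape)  # torch.Size([1, 3, 64, 96])
```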
The codebase consists of two parts: a C++ application (with OpenGL and libTorch) and a Python part (with PyTorch).
Install the required system packages:

```bash
sudo apt-get install cmake make g++ libx11-dev libxi-dev libgl1-mesa-dev libglu1-mesa-dev libxrandr-dev libxext-dev libxcursor-dev libxinerama-dev libglew-dev libglfw3-dev
```
Build the C++ application:

```bash
./create_environment.sh           # create the conda environment "inovis" used for training in Python
./install_pytorch_precompiled.sh  # install libtorch into the conda environment folder
./build_inovis.sh                 # build the C++ executable
```
See the corresponding folders for the respective applications:
Figure: left: captured image; right: novel view
Figure: left: novel view; right: ground truth