A Construct-Optimize Approach to Sparse View Synthesis without Camera Pose
Official PyTorch implementation of the ACM SIGGRAPH 2024 paper

Teaser image

A Construct-Optimize Approach to Sparse View Synthesis without Camera Pose
Kaiwen Jiang, Yang Fu, Mukund Varma T, Yash Belhe, Xiaolong Wang, Hao Su, Ravi Ramamoorthi

Paper | [Project](https://raymondjiangkw.github.io/cogs.github.io/) | Video

Abstract: Novel view synthesis from a sparse set of input images is a challenging problem of great practical interest, especially when camera poses are absent or inaccurate. Direct optimization of camera poses and usage of estimated depths in neural radiance field algorithms usually do not produce good results because of the coupling between poses and depths, and inaccuracies in monocular depth estimation. In this paper, we leverage the recent 3D Gaussian splatting method to develop a novel construct-and-optimize method for sparse view synthesis without camera poses. Specifically, we construct a solution progressively by using monocular depth and projecting pixels back into the 3D world. During construction, we optimize the solution by detecting 2D correspondences between training views and the corresponding rendered images. We develop a unified differentiable pipeline for camera registration and adjustment of both camera poses and depths, followed by back-projection. We also introduce a novel notion of an expected surface in Gaussian splatting, which is critical to our optimization. These steps enable a coarse solution, which can then be low-pass filtered and refined using standard optimization methods. We demonstrate results on the Tanks and Temples and Static Hikes datasets with as few as three widely-spaced views, showing significantly better quality than competing methods, including those with approximate camera pose information. Moreover, our results improve with more views and outperform previous InstantNGP and Gaussian Splatting algorithms even when using half the dataset.
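The core construction step is to back-project pixels into the 3D world using monocular depth and the current camera estimate. As a rough illustration of that un-projection step, here is a minimal NumPy sketch; the function and variable names are illustrative, not code from this repo:

```python
import numpy as np

def backproject_depth(depth, K, c2w):
    """Un-project a depth map into world-space 3D points.

    depth : (H, W) per-pixel depth (e.g. an aligned monocular depth map)
    K     : (3, 3) camera intrinsics
    c2w   : (4, 4) camera-to-world transform
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))        # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)      # homogeneous pixel coords, (H, W, 3)
    rays_cam = pix @ np.linalg.inv(K).T                   # ray directions in camera space
    pts_cam = rays_cam * depth[..., None]                 # scale each ray by its depth
    pts_cam_h = np.concatenate(                           # to homogeneous coordinates
        [pts_cam, np.ones_like(depth)[..., None]], axis=-1)
    pts_world = pts_cam_h.reshape(-1, 4) @ c2w.T          # camera -> world
    return pts_world[:, :3]

# Example: a constant-depth plane seen by a camera at the origin.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
points = backproject_depth(np.full((480, 640), 2.0), K, np.eye(4))
print(points.shape)  # (307200, 3)
```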

Requirements

Getting started

Quick Demo

We provide a quick demo for you to play with.

Dataset Preparation

Afterwards, you need to run the following commands to estimate monocular depths and semantic masks.

python preprocess_1_estimate_monocular_depth.py -s <path to the dataset>
python preprocess_2_estimate_semantic_mask.py -s <path to the dataset>
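If you have multiple scenes to preprocess, a small driver script such as the sketch below can run both steps in sequence for each dataset directory. Only the `-s` flag shown above is used; the `data/` layout is an illustrative assumption.

```python
import subprocess
from pathlib import Path

# Example only: point this at wherever your datasets actually live.
DATA_ROOT = Path("data")

for scene in sorted(p for p in DATA_ROOT.iterdir() if p.is_dir()):
    for script in ("preprocess_1_estimate_monocular_depth.py",
                   "preprocess_2_estimate_semantic_mask.py"):
        # Both preprocessing scripts take the dataset path via -s, as shown above.
        subprocess.run(["python", script, "-s", str(scene)], check=True)
```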

Training

To train a scene, after preprocessing, please use

python train.py -s <path to the dataset> --eval --num_images <number of training views>
Interface for available training options (you can find the default values in `arguments/__init__.py`):

Options used for constructing a coarse solution:

| Argument | Type | Description |
|:--------:|:----:|:-----------:|
| `rotation_finetune_lr` | `float` | Learning rate for the camera quaternion |
| `translation_finetune_lr` | `float` | Learning rate for the camera translation |
| `scale_finetune_lr` | `float` | Learning rate for the per-primitive scale used to align the monocular depth |
| `shift_finetune_lr` | `float` | Learning rate for the per-primitive shift used to align the monocular depth |
| `register_steps` | `int` | Number of optimization steps for registering the camera pose |
| `align_steps` | `int` | Number of optimization steps for jointly adjusting the camera pose and the monocular depth |

Options used for refinement:

| Argument | Type | Description |
|:--------:|:----:|:-----------:|
| `iterations` | `int` | Number of optimization iterations. If this is changed, other relevant options should be adjusted accordingly. |
| `depth_diff_tolerance` | `int` | Threshold on the difference between the aligned depth and the rendered depth for a region to be considered unobserved |
| `farest_percent` | `float` | Percentage of points retained after farthest-point down-sampling |
| `retain_percent` | `float` | Percentage of points retained after uniform down-sampling |
| `add_frame_interval` | `int` | Interval of training views that are back-projected after registration and adjustment |
| `scale_and_shift_mode` | `'mask'` or `'whole'` | Align the monocular depth either per primitive based on the semantic mask, or as a whole |

Other hyper-parameters should be self-explanatory.
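To illustrate what the `scale_and_shift_mode` option controls, the sketch below shows one way to fit a least-squares scale and shift that aligns a monocular depth map to a reference depth, either per mask region or for the whole image. This is an illustrative outline, not the repo's implementation:

```python
import numpy as np

def fit_scale_shift(mono_depth, ref_depth, valid):
    """Least-squares scale s and shift t such that s * mono_depth + t ~= ref_depth."""
    x, y = mono_depth[valid], ref_depth[valid]
    A = np.stack([x, np.ones_like(x)], axis=1)   # (N, 2) design matrix [depth, 1]
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s, t

def align_monocular_depth(mono_depth, ref_depth, valid, masks=None):
    """Align per region ('mask' mode) when masks are given, otherwise globally ('whole' mode)."""
    aligned = mono_depth.copy()
    regions = masks if masks is not None else [np.ones_like(valid)]
    for region in regions:                       # each region: boolean (H, W) array
        # In practice, regions with too few valid pixels should be skipped.
        s, t = fit_scale_shift(mono_depth, ref_depth, valid & region)
        aligned[region] = s * mono_depth[region] + t
    return aligned
```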

Testing

After a scene is trained, please first use

python eval.py -m <path to the saved model> --load_iteration <load iteration>

to estimate the extrinsics of the testing views. If ground-truth extrinsics are provided, it will also report pose-accuracy metrics for the estimated extrinsics of the training views.

After registering the testing views, please use `render.py` and `metrics.py` to evaluate the novel view synthesis performance.
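Putting the evaluation steps together, a single pass over one trained model could be scripted as sketched below. Only the `eval.py` flags are documented above; passing `-m` to `render.py` and `metrics.py` follows the upstream 3DGS scripts this project builds on, so treat that as an assumption:

```python
import subprocess

MODEL = "output/scene"      # example path to a saved model
LOAD_ITER = 9000            # or 3000; see the Tips section below

# 1. Register the testing views (documented flags).
subprocess.run(["python", "eval.py", "-m", MODEL,
                "--load_iteration", str(LOAD_ITER)], check=True)

# 2. Render and score the novel views. The -m flag here is assumed to
#    match the upstream 3DGS scripts this repo is built on.
subprocess.run(["python", "render.py", "-m", MODEL], check=True)
subprocess.run(["python", "metrics.py", "-m", MODEL], check=True)
```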

Tips

For training, you may need to tweak the hyper-parameters to adapt to different scenes for the best performance.

For testing, we evaluate the checkpoints at both 3,000 and 9,000 iterations and use the better one.

FAQs

Acknowledgement

This project is built upon 3DGS. We also utilize FC-CLIP, Marigold, and QuadTreeAttention. We thank the authors for their great repos.

Citation

@article{COGS2024,
    title={A Construct-Optimize Approach to Sparse View Synthesis without Camera Pose},
    author={Jiang, Kaiwen and Fu, Yang and Varma T, Mukund and Belhe, Yash and Wang, Xiaolong and Su, Hao and Ramamoorthi, Ravi},
    journal={SIGGRAPH},
    year={2024}
}