
[ ICLR 2023 Spotlight ] PyTorch implementation for "Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction"

# Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction

Tong Wu, Jiaqi Wang, Xingang Pan, Xudong Xu, Christian Theobalt, Ziwei Liu, Dahua Lin

Accepted to ICLR 2023 (Spotlight) | Paper

https://user-images.githubusercontent.com/28827385/222728479-af81dc68-6a15-4ab1-8632-5cbe3fcc17ad.mp4

## Updates

## Installation

Please first install a suitable version of PyTorch and torch_scatter on your machine. We tested with PyTorch 1.10.0 on CUDA 11.1.

```
git clone git@github.com:wutong16/Voxurf.git
cd Voxurf
pip install -r requirements.txt
```

## Datasets

### Public datasets

Extract the datasets to ./data/.

### Custom data

For your own data (e.g., a video or multi-view images), go through the preprocessing steps below.

<details>
<summary>Preprocessing (click to expand)</summary>

- Please install [COLMAP](https://colmap.github.io/) and [rembg](https://github.com/danielgatis/rembg) first.
- Extract video frames (if needed), remove the background, and save the masks.

  ```
  mkdir data/
  cd tools/preprocess
  bash run_process_video.sh ../../data/
  ```

- Estimate camera poses using COLMAP, and normalize them following [IDR](https://github.com/lioryariv/idr/blob/main/DATA_CONVENTION.md).

  ```
  bash run_convert_camera.sh ../../data/
  ```

- Finally, use `configs/custom_e2e` and run with `--scene `.

</details>
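The IDR-style normalization step rescales and recenters the COLMAP poses so that the scene of interest fits inside a unit sphere. A minimal numpy sketch of that idea, estimating the center and scale from the camera positions (function and variable names here are illustrative, not the actual script's API):

```python
import numpy as np

def normalize_cameras(cam_centers, target_radius=1.0):
    """Shift and scale camera centers so the scene fits a unit sphere.

    Mirrors the idea of IDR's normalization: estimate a scene center and
    scale from the camera positions, then apply the same similarity
    transform to every pose. (Illustrative sketch only.)
    """
    cam_centers = np.asarray(cam_centers, dtype=np.float64)
    center = cam_centers.mean(axis=0)                              # rough scene center
    radius = np.linalg.norm(cam_centers - center, axis=1).max()    # farthest camera
    scale = target_radius / radius                                 # shrink to unit sphere
    normalized = (cam_centers - center) * scale
    return normalized, center, scale

# Example: 8 cameras on a ring of radius 4 around (10, 0, 0)
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
centers = np.stack([10 + 4 * np.cos(angles), 4 * np.sin(angles), np.zeros(8)], axis=1)
norm_centers, center, scale = normalize_cameras(centers)
print(np.linalg.norm(norm_centers, axis=1).max())  # ≈ 1.0
```

In the actual pipeline the same center and scale are baked into the `world_mat`/`scale_mat` convention described in the IDR data-convention document linked above.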

## Running

### Training

DTU example:

```
bash single_runner.sh configs/dtu_e2e exp 122
```


- To train without a foreground mask on DTU:

DTU example:

```
bash single_runner_womask.sh configs/dtu_e2e_womask exp 122
```


- To train without a foreground mask on MobileBrick. The full evaluation on MobileBrick compared with other methods can be found [here](https://code.active.vision/MobileBrick/#:~:text=4.74-,Voxurf,-RGB).

MobileBrick example:

```
bash single_runner_womask.sh configs/mobilebrick_e2e_womask/ exp
```


> **Note**
> For Windows users, please use the provided batch scripts with extension `.bat` instead of the bash scripts with extension `.sh`.
> Additionally, the forward slashes `/` in paths should be replaced with backslashes `\`.
> A batch script can be run simply as `<script_name>.bat <arg1> ... <argN>`.

### NVS evaluation

```
python run.py --config /fine.py -p --sdf_mode voxurf_fine --scene --render_only --render_test
```
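The NVS (novel view synthesis) evaluation renders held-out test views and compares them against the ground-truth images, conventionally with PSNR. A minimal sketch of that metric (not the repository's exact evaluation code):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((np.asarray(pred, np.float64) - np.asarray(gt, np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

gt = np.zeros((4, 4, 3))
pred = gt + 0.1        # uniform error of 0.1 -> MSE = 0.01
print(psnr(pred, gt))  # ≈ 20.0
```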


### Extracting the mesh & evaluation

```
python run.py --config /fine.py -p --sdf_mode voxurf_fine --scene --render_only --mesh_from_sdf
```
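`--mesh_from_sdf` extracts the zero level set of the learned SDF from the voxel grid, marching-cubes style. The core principle, sketched on an analytic sphere SDF with plain numpy (illustrative only; the repository's extraction code may differ):

```python
import numpy as np

def surface_voxels(sdf_grid):
    """Flag voxels whose 8 corner SDF values straddle the zero level set.

    Marching cubes builds triangles inside exactly these sign-change
    voxels; here we only locate them. (Sketch, not the repo's code.)
    """
    s = sdf_grid
    # Stack the 8 corner values of every voxel along a new last axis.
    corners = np.stack([s[i:s.shape[0]-1+i, j:s.shape[1]-1+j, k:s.shape[2]-1+k]
                        for i in (0, 1) for j in (0, 1) for k in (0, 1)], axis=-1)
    return (corners.min(-1) < 0) & (corners.max(-1) > 0)

# Analytic SDF of a sphere with radius 0.5, sampled on a 32^3 grid over [-1, 1]^3
xs = np.linspace(-1, 1, 32)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
sdf = np.sqrt(X**2 + Y**2 + Z**2) - 0.5
mask = surface_voxels(sdf)
print(mask.sum() > 0)  # True: a shell of voxels crosses the surface
```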

Add `--extract_color` to get a **colored mesh** as below. It is beyond the scope of this work to estimate material, albedo, and illumination; we simply use the normal direction as the view direction to obtain the vertex colors.

![colored_mesh (1)](https://user-images.githubusercontent.com/28827385/222783393-63216e57-489c-46fb-9c24-4c8b6eed83bf.png)
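Conceptually, the coloring step queries the learned radiance at each vertex with the surface normal substituted for the view direction. A toy sketch of that idea with a stand-in radiance function (all names here are hypothetical, not Voxurf's API):

```python
import numpy as np

def color_vertices(vertices, normals, radiance_fn):
    """Query a radiance function at each vertex, using the (unit) surface
    normal as the view direction, as described above. Sketch only."""
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return radiance_fn(vertices, normals)  # (N, 3) RGB in [0, 1]

# Stand-in radiance: shade by how much the normal faces +z (hypothetical)
def toy_radiance(pts, dirs):
    shade = np.clip(dirs[:, 2:3], 0.0, 1.0)
    return np.repeat(shade, 3, axis=1)

verts = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
norms = np.array([[0.0, 0.0, 2.0], [2.0, 0.0, 0.0]])  # unnormalized on purpose
print(color_vertices(verts, norms, toy_radiance))
# rows: [1, 1, 1] for the up-facing vertex, [0, 0, 0] for the side-facing one
```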

## Citation
If you find the code useful for your research, please cite our paper.

```
@inproceedings{wu2022voxurf,
  title={Voxurf: Voxel-based Efficient and Accurate Neural Surface Reconstruction},
  author={Tong Wu and Jiaqi Wang and Xingang Pan and Xudong Xu and Christian Theobalt and Ziwei Liu and Dahua Lin},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2023},
}
```



## Acknowledgement 
Our code is heavily based on [DirectVoxGO](https://github.com/sunset1995/DirectVoxGO) and [NeuS](https://github.com/Totoro97/NeuS). Some of the preprocessing code is borrowed from [IDR](https://github.com/lioryariv/idr/blob/main/DATA_CONVENTION.md) and [LLFF](https://github.com/Fyusion/LLFF).
Thanks to the authors for their awesome works and great implementations! Please check out their papers for more details.