
NeuDA

Project Page | Paper | Data

Official PyTorch implementation of the paper "NeuDA: Neural Deformable Anchor for High-Fidelity Implicit Surface Reconstruction", accepted to CVPR 2023.


We present the Deformable Anchors representation and a simple hierarchical position encoding strategy. The former maintains learnable anchor points at grid vertices to enhance the capability of neural implicit models in handling complicated geometric structures, and the latter explores the complementary high-frequency and low-frequency geometry properties in the multi-level anchor grid structure.
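
To make the idea concrete, below is a minimal PyTorch sketch of a single level of a deformable anchor grid. It is an illustration of the concept rather than the implementation in this repository, and all names (DeformableAnchorGrid, resolution, freq) are ours:

import torch
import torch.nn as nn

class DeformableAnchorGrid(nn.Module):
    """One level of a deformable anchor grid (illustrative sketch).

    Each grid vertex stores a learnable 3D anchor point, initialized at the
    vertex position and free to move during training. A query point is
    encoded by trilinearly interpolating the frequency-encoded anchors of
    its enclosing cell."""

    def __init__(self, resolution: int, freq: float, bound: float = 1.0):
        super().__init__()
        self.res, self.freq, self.bound = resolution, freq, bound
        # anchors start at the regular grid vertices
        lin = torch.linspace(-bound, bound, resolution)
        grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing="ij"), dim=-1)
        self.anchors = nn.Parameter(grid.clone())  # (R, R, R, 3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, 3)
        # continuous grid coordinates and the lower corner of each cell
        u = (x + self.bound) / (2 * self.bound) * (self.res - 1)
        i0 = u.floor().long().clamp(0, self.res - 2)
        w = u - i0.float()  # trilinear weights in [0, 1], shape (N, 3)
        enc = x.new_zeros(x.shape[0], 6)
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    a = self.anchors[i0[:, 0] + dx, i0[:, 1] + dy, i0[:, 2] + dz]
                    wgt = ((w[:, 0] if dx else 1 - w[:, 0])
                           * (w[:, 1] if dy else 1 - w[:, 1])
                           * (w[:, 2] if dz else 1 - w[:, 2]))
                    # level-specific frequency encoding of the (deformed) anchor
                    pe = torch.cat([torch.sin(self.freq * a),
                                    torch.cos(self.freq * a)], dim=-1)
                    enc = enc + wgt.unsqueeze(-1) * pe
        return enc  # (N, 6); one level of the hierarchical encoding

A hierarchical encoder would stack several such levels (e.g. increasing resolution, with freq growing per level) and concatenate their outputs before feeding the SDF network.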

https://github.com/3D-FRONT-FUTURE/NeuDA/assets/5526396/3a25e3eb-ea57-4831-bbbb-3280addb2ddb

This repository implements the training / evaluation pipeline of our paper and provides a script to generate a textured mesh. In addition, based on the surface reconstructed by NeuDA, we further adopt an adversarial texture optimization method to recover fine-detailed texture.

Install

NeuDA - Surface Reconstruction

Clone this repository and install the environment:

$ git clone https://github.com/3D-FRONT-FUTURE/NeuDA.git
$ cd NeuDA
$ pip install -r requirements.txt

[Optional] Adversarial Texture Optimization

For adversarial texture optimization, you need to install Blender 3.4, which is used to generate the initial UV map and texture. Make sure the blender-3.4 executable can be found through your environment variables (e.g. PATH).
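
For example, on Linux you could expose the executable like this (the install path below is illustrative):

$ export PATH=$PATH:/opt/blender-3.4
$ blender --version    # should report Blender 3.4.x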

Compile the CUDA rendering and C++ libraries.

$ sudo apt-get install libglm-dev libopencv-dev

$ cd NeuDA/models/texture/Rasterizer
$ ./compile.sh

$ cd ../CudaRender
$ ./compile.sh

Dataset

Public dataset

Custom data

Train NeuDA with your custom data:

  1. Install COLMAP.
  2. Follow the first step ("General step-by-step usage") of LLFF to recover camera poses.
  3. Run the script tools/preprocess_llff.py to generate cameras_sphere.npz, as shown below.
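
A typical invocation might look like the following; <case_dir> and the exact arguments are illustrative, so check the script's argument parser for the supported options:

$ python tools/preprocess_llff.py <case_dir>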

The data should be organized as follows:

<case_name>
|-- cameras_sphere.npz  # camera parameters
|-- image
    |-- 000.png         # target image for each view
    |-- 001.png
    ...
|-- mask
    |-- 000.png         # target mask for each view (for the unmasked setting, set all pixels to 255)
    |-- 001.png
    ...
|-- sparse
    |-- points3D.bin    # sparse point clouds
    ...
|-- poses_bounds.npy    # camera extrinsic & intrinsic params, details seen in LLFF

Here, cameras_sphere.npz follows the data format of IDR, where world_mat_xx denotes the world-to-image projection matrix and scale_mat_xx denotes the normalization matrix.
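
For reference, here is a minimal sketch, following the IDR/NeuS convention, of how these matrices can be decoded back into intrinsics and camera-to-world poses; the file name and loop are illustrative:

import numpy as np
import cv2

data = np.load("cameras_sphere.npz")
n_views = len([k for k in data.files if k.startswith("world_mat_")])
for i in range(n_views):
    world_mat = data["world_mat_%d" % i]   # world-to-image projection (4x4)
    scale_mat = data["scale_mat_%d" % i]   # maps the scene into a unit sphere
    P = (world_mat @ scale_mat)[:3, :4]
    K, R, t = cv2.decomposeProjectionMatrix(P)[:3]
    K = K / K[2, 2]                        # 3x3 intrinsics
    pose = np.eye(4)
    pose[:3, :3] = R.transpose()           # camera-to-world rotation
    pose[:3, 3] = (t[:3] / t[3])[:, 0]     # camera center in normalized world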

Run
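
Training and mesh extraction follow the NeuS-style entry point this code builds on; the script and config names below are assumptions carried over from that convention, so check the repository for the files actually shipped:

$ python exp_runner.py --mode train --conf ./confs/wmask.conf --case <case_name>
$ python exp_runner.py --mode validate_mesh --conf ./confs/wmask.conf --case <case_name> --is_continue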

The corresponding log can be found in exp/<case_name>/<exp_name>/.

Citation

If you find the code useful for your research, please cite our paper.

@inproceedings{cai2023neuda,
  title={NeuDA: Neural Deformable Anchor for High-Fidelity Implicit Surface Reconstruction},
  author={Cai, Bowen and Huang, Jinchi and Jia, Rongfei and Lv, Chengfei and Fu, Huan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2023}
}

Acknowledgement

Our code is heavily based on NeuS. Some of the evaluation and CUDA rendering code is borrowed from NeuralWarp and AdversarialTexture, respectively. Thanks to the authors for their great work; please check out their papers for more details.