Code for: "SPIDR: SDF-based Neural Point Fields for Illumination and Deformation"
UPDATE 06/04: Added more trained checkpoints (synthetic + BlendedMVS).
UPDATE 02/14: Tested the inference code on a machine (RTX 2070) with a new environment; works fine.
git clone https://github.com/nexuslrf/SPIDR.git
cd SPIDR
Environment
pip install -r requirements.txt
Note: some packages in `requirements.txt` (e.g., `torch` and `torch_scatter`) might need a different install command, and `open3d` has to be >=0.16.
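For instance, matching builds of `torch` and `torch_scatter` can be installed like this (a sketch; the CUDA 11.3 / torch 1.10 combination below is an assumption, so match it to your own toolkit):

```bash
# Install a CUDA-enabled torch build, then the matching torch_scatter wheel
pip install torch==1.10.0+cu113 -f https://download.pytorch.org/whl/torch_stable.html
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.10.0+cu113.html
pip install "open3d>=0.16"
```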
Torch extensions
We replaced the original PointNeRF's pycuda kernels with torch extensions (so pycuda is no longer needed). To set up our torch extensions for ray marching:
cd models/neural_points/c_ext
python setup.py build_ext --inplace
cd -
We have tested our code on torch 1.8, 1.10, and 1.11.
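A quick way to confirm the build succeeded (a minimal check, relying only on the fact that `build_ext --inplace` drops the compiled objects next to the sources):

```bash
# From the repo root: list the shared objects produced by build_ext --inplace
python -c "import glob; print(glob.glob('models/neural_points/c_ext/**/*.so', recursive=True))"
```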
Download the datasets from the following links and put them under the `./data_src/` directory:
- NeRF Synthetic (`./data_src/nerf_synthetic`)
- BlendedMVS (`./data_src/BlendedMVS`)
- Deformed Synthetic (`./data_src/deform_synthetic`) (manikin + trex, with Blender sources)

We provide some model checkpoints for testing (more will be added in the future):
- MVSNet (`checkpoints/MVSNet`)
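After downloading, the layout should look roughly like this (a sketch based only on the paths listed above):

```
SPIDR/
├── data_src/
│   ├── nerf_synthetic/
│   ├── BlendedMVS/
│   └── deform_synthetic/   # manikin + trex, with Blender sources
└── checkpoints/
    └── MVSNet/
```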
Note: We'll add more instructions later; the current training code might be buggy (NOT TESTED).
First stage: train a point-based NeRF model. This step is similar to the original PointNeRF.
cd run/
python train_ft.py --config ../dev_scripts/spidr/manikin.ini --run_mode=sdf
Second stage: train the BRDF + environment light MLPs.
The second stage of the training requires pre-computing the depth maps from the light sources:
python test_ft.py --config ../dev_scripts/spidr/manikin.ini --run_mode=sdf --bake_light --down_sample=0.5
`--down_sample=0.5` halves the size of the rendered depth images.
Then start the BRDF branch training:
python train_ft.py --config ../dev_scripts/spidr/manikin.ini --run_mode=lighting
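Putting the two stages together, the full training pipeline for the manikin scene is (the same commands as above, collected for convenience):

```bash
cd run/
# Stage 1: fit the SDF-based neural point field
python train_ft.py --config ../dev_scripts/spidr/manikin.ini --run_mode=sdf
# Bake depth maps from the light sources at half resolution
python test_ft.py --config ../dev_scripts/spidr/manikin.ini --run_mode=sdf --bake_light --down_sample=0.5
# Stage 2: train the BRDF + environment light MLPs
python train_ft.py --config ../dev_scripts/spidr/manikin.ini --run_mode=lighting
```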
We use the manikin scene as an example.
To simply render frames (SPIDR* in the paper):
cd run/
python test_ft.py --config ../dev_scripts/spidr/manikin.ini --run_mode=sdf --split=test
You can set a smaller `--random_sample_size` according to your GPU memory.
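For example (the value below is only illustrative; pick whatever fits your GPU):

```bash
python test_ft.py --config ../dev_scripts/spidr/manikin.ini --run_mode=sdf --split=test --random_sample_size=32
```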
For rendering with BRDF estimations, we first need to bake the depth maps from the light sources. If you already did this during BRDF training, you don't need to run it again (but it must be re-run if the object shape has changed).
python test_ft.py --config ../dev_scripts/spidr/manikin.ini --run_mode=sdf --bake_light --down_sample=0.5
Then, with the baked light depth maps, we can run the BRDF-based rendering branch.
python test_ft.py --config ../dev_scripts/spidr/manikin.ini --run_mode=lighting --split=test
Note on the output images: `*-coarse_raycolor.png` are the results without BRDF estimation (plain NeRF rendering, corresponding to SPIDR in the paper). `*-brdf_combine_raycolor.png` are the results with BRDF estimation and PB rendering.
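To locate both variants after a test run (the checkpoint directory below is an assumption based on the paths used elsewhere in this README):

```bash
# Search the run's output directory for both image variants
find ../checkpoints/nerfsynth_sdf/manikin -name "*coarse_raycolor.png"
find ../checkpoints/nerfsynth_sdf/manikin -name "*brdf_combine_raycolor.png"
```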
To extract a mesh from the learned SDF (marching cubes):
python test_ft.py --config ../dev_scripts/spidr/manikin.ini --run_mode=sdf --marching_cube
To export the neural point cloud from a trained checkpoint:
cd ../deform_tools
python ckpt2pcd.py --save_dir ../checkpoints/nerfsynth_sdf/manikin --ckpt 120000_net_ray_marching.pth --pcd_file 120000_pcd.ply
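Optionally, the exported point cloud can be inspected with open3d (>=0.16, already required above); a minimal sketch, assuming the export path from the command above:

```bash
python -c "
import open3d as o3d
# Load and visualize the exported neural point cloud
pcd = o3d.io.read_point_cloud('../checkpoints/nerfsynth_sdf/manikin/120000_pcd.ply')
print(pcd)
o3d.visualization.draw_geometries([pcd])
"
```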
We'll provide three examples of different editing operations; please check here for the examples.
P.S. Using segmentation tools to assist the manual deformation (e.g., point selection) could be a very interesting research direction. Building on the 2D segmentation demo from Segment Anything, my initial attempt is here: SAM-3D-Selector.
Simply pass the target environment HDRI via `--light_env_path`:
python test_ft.py --config ../dev_scripts/spidr/manikin.ini --run_mode=lighting --split=test --light_env_path=XXX.hdr
Note: the environment HDRI will be resized to 32x16 resolution before the relighting. The light intensity can be adjusted with `--light_intensity`, e.g., `--light_intensity=1.7`.
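For example, to relight with a custom environment map at a higher intensity (`envmap.hdr` is a placeholder for your own HDRI file):

```bash
python test_ft.py --config ../dev_scripts/spidr/manikin.ini --run_mode=lighting \
    --split=test --light_env_path=envmap.hdr --light_intensity=1.7
```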
SDEX Aerial GUNDAM from TWFM (captured at my lab)
EVA Unit-01 Statue in Shanghai (from the BlendedMVS dataset)
If you find our work useful in your research, a citation would be appreciated:
@article{liang2022spidr,
  title={SPIDR: SDF-based Neural Point Fields for Illumination and Deformation},
  author={Liang, Ruofan and Zhang, Jiahao and Li, Haoda and Yang, Chen and Guan, Yushi and Vijaykumar, Nandita},
  journal={arXiv preprint arXiv:2210.08398},
  year={2022}
}
This codebase is developed based on Point-NeRF. If you have any confusion about the MVS and point initialization parts, we recommend referring to their original repo.