Please report bugs here; we will publish bug fixes and the latest updates.
Please cite our paper, DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction (NeurIPS 2019):
@incollection{NIPS2019_8340,
title = {DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction},
author = {Xu, Qiangeng and Wang, Weiyue and Ceylan, Duygu and Mech, Radomir and Neumann, Ulrich},
booktitle = {Advances in Neural Information Processing Systems 32},
editor = {H. Wallach and H. Larochelle and A. Beygelzimer and F. d\textquotesingle Alch\'{e}-Buc and E. Fox and R. Garnett},
pages = {492--502},
year = {2019},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/8340-disn-deep-implicit-surface-network-for-high-quality-single-view-3d-reconstruction.pdf}
}
Code contact: Qiangeng Xu* and Weiyue Wang*
pip install trimesh==2.37.20
cd {DISN}
mkdir checkpoint
cd checkpoint
wget https://www.dropbox.com/s/2ts7qc9w4opl4w4/SDF_DISN.tar?dl=0
tar -xvzf SDF_DISN.tar?dl=0
rm -rf SDF_DISN.tar?dl=0
cd ..
mkdir cam_est/checkpoint
cd cam_est/checkpoint
wget https://www.dropbox.com/s/hyv4lcvpfu0au9e/cam_DISN.tar?dl=0
tar -xvzf cam_DISN.tar?dl=0
rm -rf cam_DISN.tar?dl=0
cd ../../
Change the corresponding library paths in isosurface/LIB_PATH to match your system.
cd {DISN}
source isosurface/LIB_PATH
mkdir -p log
nohup python -u demo/demo.py --cam_est --log_dir checkpoint/SDF_DISN --cam_log_dir cam_est/checkpoint/cam_DISN --img_feat_twostream --sdf_res 256 &> log/create_sdf.log &
The result is demo/result.obj.
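To quickly sanity-check the output, the trimesh package installed above can load the mesh; a minimal sketch (only the output path demo/result.obj comes from the demo itself):

```python
import trimesh

# Load the reconstructed mesh produced by the demo script.
mesh = trimesh.load("demo/result.obj")
print("vertices:", mesh.vertices.shape, "faces:", mesh.faces.shape)

# Optional: open an interactive viewer if a display is available.
# mesh.show()
```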
"raw_dirs_v1": {
"mesh_dir": "/ssd1/datasets/ShapeNet/ShapeNetCore.v1/",
"norm_mesh_dir": "/ssd1/datasets/ShapeNet/march_cube_objs_v1/",
"rendered_dir": "/ssd1/datasets/ShapeNet/ShapeNetRendering/",
"renderedh5_dir": "/ssd1/datasets/ShapeNet/ShapeNetRenderingh5_v1/",
"sdf_dir": "/ssd1/datasets/ShapeNet/SDF_v1/"
}
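These directories are the entries that the scripts look up in info.json; a minimal sketch of checking them in Python (the location data/info.json is an assumption, adjust it to wherever your copy of the config lives):

```python
import json
import os

# Load the dataset paths listed above (path to info.json is an assumption).
with open("data/info.json") as f:
    info = json.load(f)

raw_dirs = info["raw_dirs_v1"]
for name, path in raw_dirs.items():
    print(f"{name:16s} -> {path}  exists: {os.path.isdir(path)}")
```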
Download the dataset following the instructions at https://www.shapenet.org/account/ (about 30 GB):
cd {your download dir}
wget http://shapenet.cs.stanford.edu/shapenet/obj-zip/ShapeNetCore.v1.zip
unzip ShapeNetCore.v1.zip -d {your mesh_dir}
To directly download the generated SDF files and reconstructed models, follow the instructions here. To generate the SDF files and reconstructed models yourself, execute the command lines below (expect the script to run for several hours). Our data preparation uses the paper Vega: Non-linear FEM Deformable Object Simulator; please also cite it if you use our code to generate SDF files.
cd {DISN}
mkdir log
source isosurface/LIB_PATH
nohup python -u preprocessing/create_point_sdf_grid.py --thread_num {recommend 9} --category {default 'all', but can be single category like 'chair'} &> log/create_sdf.log &
## The SDF folder takes about 9.0 GB; the marching-cubes obj folder takes about 245 GB
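The reconstruction half of this step runs marching cubes on the sampled SDF grids. As a toy illustration of that idea (not the repository's exact file format or code), scikit-image can extract the zero iso-surface of a synthetic SDF and trimesh can write it out:

```python
import numpy as np
import trimesh
from skimage import measure  # marching_cubes_lewiner in older scikit-image versions

# Toy signed distance grid of a sphere; the real script stores one grid per object.
res = 65
xs = np.linspace(-1.0, 1.0, res)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
sdf = np.linalg.norm(grid, axis=-1) - 0.5          # sphere of radius 0.5

# Extract the zero iso-surface, mirroring the --iso 0.00 setting used at test time.
verts, faces, _, _ = measure.marching_cubes(sdf, level=0.0)
verts = verts / (res - 1) * 2.0 - 1.0              # map index coords back to [-1, 1]^3
trimesh.Trimesh(vertices=verts, faces=faces).export("sphere_from_sdf.obj")
```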
wget http://cvgl.stanford.edu/data2/ShapeNetRendering.tgz
Untar it to {your rendered_dir}.
cd {DISN}
nohup python -u preprocessing/create_img_h5.py &> log/create_imgh5.log &
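The layout of the generated h5 files can be inspected with h5py; a minimal sketch (the file path is a placeholder, and the dataset names depend on what create_img_h5.py wrote):

```python
import h5py

# Point this at any file written by create_img_h5.py (placeholder path).
path = "path/to/one_rendering.h5"
with h5py.File(path, "r") as f:
    for key, ds in f.items():
        if isinstance(ds, h5py.Dataset):
            print(key, ds.shape, ds.dtype)
```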
### Train camera pose estimation on the original rendered image dataset.
nohup python -u cam_est/train_sdf_cam.py --log_dir checkpoint/{your training checkpoint dir} --gpu 0 --loss_mode 3D --learning_rate 2e-5 &> log/cam_3D_all.log &
### Train camera pose estimation on the rendered image dataset augmented with 2 additional DoF (2D shift).
nohup python -u cam_est/train_sdf_cam.py --log_dir checkpoint/{your training checkpoint dir} --gpu 2 --loss_mode 3D --learning_rate 1e-4 --shift --shift_weight 2 &> log/cam_3D_shift2_all.log &
### Create img_h5 files at {renderedh5_dir_est} in your info.json; by default only the h5 files of test images and camera parameters are generated (about 5.3 GB).
nohup python -u train_sdf_cam.py --img_h5_dir {renderedh5_dir_est} --create --restore_model checkpoint/cam_3D_all --log_dir checkpoint/{your training checkpoint dir} --gpu 0 --loss_mode 3D --batch_size 24 &> log/create_cam_mixloss_all.log &
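A quick sanity check on estimated cameras is to project known 3D points into the rendered image; a minimal sketch assuming a 3x3 pinhole intrinsic matrix and a 3x4 world-to-camera extrinsic matrix (the parameterization used inside train_sdf_cam.py may differ, and the intrinsics below are toy values):

```python
import numpy as np

def project(points_world, K, RT):
    """Project Nx3 world points with intrinsics K (3x3) and extrinsics RT (3x4)."""
    pts_h = np.concatenate([points_world, np.ones((len(points_world), 1))], axis=1)
    cam = RT @ pts_h.T                  # 3xN points in camera coordinates
    uv = K @ cam                        # 3xN homogeneous pixel coordinates
    return (uv[:2] / uv[2]).T           # Nx2 pixel coordinates

# Toy example: identity rotation, camera 2 units back, assumed 137x137 renderings.
K = np.array([[100.0, 0.0, 68.5], [0.0, 100.0, 68.5], [0.0, 0.0, 1.0]])  # toy focal length
RT = np.hstack([np.eye(3), np.array([[0.0], [0.0], [2.0]])])
print(project(np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0]]), K, RT))
```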
If training from scratch, you can load the official pretrained vgg_16 by setting --restore_modelcnn; or set --restore_model to your checkpoint to continue training:
nohup python -u train/train_sdf.py --gpu 0 --img_feat_twostream --restore_modelcnn ./models/CNN/pretrained_model/vgg_16.ckpt --log_dir checkpoint/{your training checkpoint dir} --category all --num_sample_points 2048 --batch_size 20 --learning_rate 0.0001 --cat_limit 36000 &> log/DISN_train_all.log &
source isosurface/LIB_PATH
nohup python -u test/create_sdf.py --img_feat_twostream --view_num 24 --sdf_res 64 --batch_size 1 --gpu 0 --log_dir checkpoint/{your training checkpoint dir} --iso 0.00 --category all &> log/DISN_create_all.log &
nohup python -u test/create_sdf.py --img_feat_twostream --view_num 24 --sdf_res 64 --batch_size 1 --gpu 3 --log_dir checkpoint/{your training checkpoint dir} --iso 0.00 --category all --cam_est &> log/DISN_create_all_cam.log &
nohup python -u clean_smallparts.py --src_dir checkpoint/{your training checkpoint dir}/test_objs/65_0.0 --tar_dir checkpoint/{your training checkpoint dir}/test_objs/65_0.0 --thread_n 10 &> log/DISN_clean.log &
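clean_smallparts.py removes spurious small disconnected pieces from the reconstructed meshes. Conceptually, the same cleanup can be sketched with trimesh (the area threshold and paths below are arbitrary placeholders, not the script's values):

```python
import trimesh

def keep_large_parts(mesh, min_area_ratio=0.01):
    """Keep only connected components whose surface area is a meaningful fraction of the total."""
    parts = mesh.split(only_watertight=False)
    total = sum(p.area for p in parts)
    kept = [p for p in parts if p.area / total >= min_area_ratio]
    return trimesh.util.concatenate(kept)

mesh = trimesh.load("some_test_obj.obj")          # placeholder path
keep_large_parts(mesh).export("some_test_obj_clean.obj")
```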
nohup python -u test/test_cd_emd.py --img_feat_twostream --view_num 24 --num_sample_points 2048 --gpu 0 --batch_size 24 --log_dir checkpoint/{your training checkpoint dir} --cal_dir checkpoint/{your training checkpoint dir}/test_objs/65_0.0 --category all &> log/DISN_cd_emd_all.log &
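For reference, the symmetric Chamfer distance computed in this step can be sketched with NumPy/SciPy (test_cd_emd.py has its own GPU implementation and normalization; this is only an illustration of the metric):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(p1, p2):
    """Symmetric Chamfer distance: mean squared nearest-neighbor distance in both directions."""
    d12, _ = cKDTree(p2).query(p1)      # for every point in p1, distance to nearest point in p2
    d21, _ = cKDTree(p1).query(p2)
    return np.mean(d12 ** 2) + np.mean(d21 ** 2)

a = np.random.rand(2048, 3)
b = np.random.rand(2048, 3)
print(chamfer_distance(a, b))
```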
nohup python -u test/test_f_score.py --img_feat_twostream --view_num 24 --num_sample_points 2048 --gpu 0 --batch_size 24 --log_dir checkpoint/{your training checkpoint dir} --cal_dir checkpoint/{your training checkpoint dir}/test_objs/65_0.0 --category all --truethreshold 2.5 &> log/DISN_fscore_2.5.log &
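Similarly, the F-score at a distance threshold is the harmonic mean of precision and recall over nearest-neighbor distances; a minimal sketch (the scale of --truethreshold 2.5 depends on the script's internal normalization):

```python
import numpy as np
from scipy.spatial import cKDTree

def f_score(pred, gt, threshold):
    """F-score: harmonic mean of precision and recall at a distance threshold."""
    d_pred_to_gt, _ = cKDTree(gt).query(pred)
    d_gt_to_pred, _ = cKDTree(pred).query(gt)
    precision = np.mean(d_pred_to_gt < threshold)
    recall = np.mean(d_gt_to_pred < threshold)
    return 2 * precision * recall / (precision + recall + 1e-8)

print(f_score(np.random.rand(2048, 3), np.random.rand(2048, 3), threshold=0.05))
```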
nohup python -u test/test_iou.py --img_feat_twostream --view_num 24 --log_dir checkpoint/{your training checkpoint dir} --cal_dir checkpoint/{your training checkpoint dir}/test_objs/65_0.0 --category all --dim 110 &> log/DISN_iou_all.log &
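Finally, the IoU metric reduces to intersection-over-union of two occupancy grids; a toy sketch at grid resolution 110, matching --dim 110 (the real script voxelizes the predicted and ground-truth meshes first):

```python
import numpy as np

def voxel_iou(occ_a, occ_b):
    """IoU between two boolean occupancy grids of the same shape."""
    inter = np.logical_and(occ_a, occ_b).sum()
    union = np.logical_or(occ_a, occ_b).sum()
    return inter / max(union, 1)

# Toy occupancy grids (real evaluation fills them from voxelized meshes).
a = np.zeros((110, 110, 110), dtype=bool); a[30:80, 30:80, 30:80] = True
b = np.zeros((110, 110, 110), dtype=bool); b[40:90, 40:90, 40:90] = True
print(voxel_iou(a, b))
```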