Use `convert.py` to convert `.off` files to `.mat` files; you can download the ModelNet data from Kaggle. Basically, I have already put the chair dataset and a trained model as examples in the `volumetric_data` and `outputs` folders, so you can go directly to the training or evaluation part, but a complete pipeline is still given here.
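If you want a sense of what that conversion involves, here is a minimal sketch of the `.off`-to-`.mat` idea, not the repo's actual `convert.py`: it assumes the `trimesh` library for loading and voxelizing the mesh, and a hypothetical `voxels` key plus ModelNet10 paths for the saved `.mat` files.

```python
# Minimal sketch of converting an .off mesh into a 32x32x32 voxel .mat file.
# NOTE: this is an illustration, not the repo's convert.py. The trimesh dependency,
# the 'voxels' key, and the ModelNet10 paths below are assumptions.
import glob
import os

import numpy as np
import scipy.io
import trimesh


def off_to_mat(off_path, mat_path, res=32):
    mesh = trimesh.load(off_path)
    # Pick a voxel pitch so the longest bounding-box side spans roughly `res` cells.
    pitch = mesh.extents.max() / res
    grid = mesh.voxelized(pitch).matrix.astype(np.uint8)
    # Fit the voxelized mesh into an exact res x res x res volume.
    volume = np.zeros((res, res, res), dtype=np.uint8)
    s = [min(d, res) for d in grid.shape]
    volume[: s[0], : s[1], : s[2]] = grid[: s[0], : s[1], : s[2]]
    scipy.io.savemat(mat_path, {"voxels": volume})


if __name__ == "__main__":
    for off_file in glob.glob("ModelNet10/chair/train/*.off"):
        off_to_mat(off_file, os.path.splitext(off_file)[0] + ".mat")
```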
Put the converted `.mat` files from ModelNet into the `volumetric_data` folder. As we use ModelNet instead of ShapeNet here, the results may be inconsistent with the paper. Then `cd src` and simply run `python main.py` on GPU or CPU. Of course, you need a GPU to train until you get good results. I used one GeForce GTX 1070 in my experiments on 3D models with a resolution of 32x32x32 and a maximum of 256 feature-map channels; because of this, the results may also differ from the paper. You may need a stronger GPU for the higher 64x64x64 resolution with 512 feature maps.
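Before launching a long run, it can help to confirm that PyTorch actually sees your GPU; the snippet below is a generic PyTorch check (assuming the code is PyTorch-based), not part of this repo.

```python
import torch

# Generic PyTorch sanity check, not part of this repo: make sure a CUDA device is
# visible before training. 32x32x32 volumes with up to 256 feature-map channels fit
# on an 8 GB card like the GTX 1070; 64x64x64 with 512 channels needs a bigger GPU.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Using {props.name} ({props.total_memory / 1e9:.1f} GB)")
else:
    print("No CUDA device found; training will fall back to CPU and be very slow.")
```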
During training, model weights and some 3D reconstruction images are logged to the `outputs` folder every `model_save_step` steps, as configured in `params.py`. You can play with all the parameters in `params.py`.
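For orientation, a `params.py` for this kind of setup usually looks something like the sketch below; the parameter names and default values here are guesses for illustration, so check the actual file rather than relying on them.

```python
# Illustrative sketch of the kind of settings params.py exposes.
# The names and defaults below are guesses, not the repo's actual configuration.
epochs = 1000                  # total training epochs
batch_size = 32                # lower this if you run out of GPU memory
z_dim = 200                    # size of the latent vector fed to the generator
cube_len = 32                  # voxel resolution (64 needs a much bigger GPU)
g_lr = 0.0025                  # generator learning rate
d_lr = 0.00001                 # discriminator learning rate
model_save_step = 1            # save weights / reconstruction images every N steps
output_dir = "../outputs"      # where checkpoints and images are written
data_dir = "../volumetric_data/chair/"  # converted .mat files
model_name = "dcgan"           # subfolder of output_dir for this run
use_visdom = False             # stream reconstructions to a running visdom server
```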
For evaluation, run `python main.py --test=True` to call `tester.py`. To visualize the results with visdom, first start the server with `python -m visdom.server`, then run `python main.py --test=True --use_visdom=True`. The sampled results are saved in the `sample_results` folder.
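For reference, the visdom side of this amounts to something like the generic sketch below (not the repo's `tester.py` code): it scatter-plots the occupied voxels of a generated volume so you can inspect it at http://localhost:8097 while the server is running.

```python
import numpy as np
import visdom

# Generic visdom usage sketch, not the repo's actual code.
vis = visdom.Visdom()  # connects to the server started by `python -m visdom.server`

# Stand-in for a generated 32x32x32 occupancy grid.
volume = (np.random.rand(32, 32, 32) > 0.95).astype(np.uint8)

# Plot occupied voxel coordinates as a 3D scatter in the browser.
points = np.argwhere(volume > 0.5).astype(np.float64)
vis.scatter(X=points, opts={"title": "generated voxels", "markersize": 3})
```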
A pretrained model is included in the `outputs` folder. To test it, run `python main.py --test=True --model_name=dcgan_pretrained`. You will find the outputs in the `test_outputs` folder within `dcgan_pretrained`.
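If you want to sample from the pretrained generator outside of `tester.py`, the general pattern looks roughly like this; the `net_G` import, the `G.pth` checkpoint name, and the latent size of 200 are all assumptions for illustration and need to be matched to the actual code in `src/`.

```python
import torch

# Hypothetical sketch of sampling from a saved generator checkpoint.
# The module/class name, checkpoint filename, and latent size are assumptions.
from model import net_G  # hypothetical import; use the real generator class from src/

G = net_G()
state = torch.load("../outputs/dcgan_pretrained/G.pth", map_location="cpu")
G.load_state_dict(state)
G.eval()

with torch.no_grad():
    z = torch.randn(1, 200)              # latent code (size assumed)
    voxels = G(z).squeeze().numpy()      # e.g. a (32, 32, 32) occupancy grid
binary = voxels > 0.5                    # threshold to a binary voxel grid
print(binary.sum(), "occupied voxels")
```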