The final chamfer distances are 0.00000000 / 0.41338158 for all eval data.
It looks like the initialization was bad. Did you initialize with a pretrained sphere? If you did, could you try running with different seeds (e.g. add `--seed=1` to the command line for seed number 1, or just `--seed=` for random seeds)? I was able to train all models with the default `--seed=0`, so I'm not exactly sure what happened, but changing the seed should fix this.
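For example, the training command quoted later in this thread with an explicit seed added would look like this (a sketch; only `--seed` is new):

```bash
python3 train.py --model=sdf_srn --yaml=options/shapenet/sdf_srn.yaml \
    --name=chair --data.shapenet.cat=chair --max_epoch=28 --seed=1
```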
The reconstructed image would still look similar because it would still fit the training image/silhouette, just in a bad way. Also, the depth map here is actually bad -- ideally the foreground should be light and the background should be dark.
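As a rough numeric sanity check of that convention (a minimal sketch, not part of the repo; the file names and the availability of a silhouette mask are hypothetical), the foreground pixels of the depth map should be brighter on average than the background:

```python
# Minimal depth-map sanity check (hypothetical file names).
# Convention described above: foreground light, background dark.
import numpy as np
from PIL import Image

depth = np.asarray(Image.open("depth.png").convert("L"), dtype=np.float32) / 255.0
mask = np.asarray(Image.open("mask.png").convert("L"), dtype=np.float32) / 255.0 > 0.5

print("foreground mean:", depth[mask].mean())   # should be relatively high (light)
print("background mean:", depth[~mask].mean())  # should be relatively low (dark)
```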
Hi, thank you very much. I tried different seeds and ran the experiments again, but they still gave me bad results. Here are the log files:
- random seed: log_random_seed.txt
- seed=0, multi-view data: log.txt
- seed=0, single-view data: log_single_view.txt
I would expect the multi-view experiment to give results similar to the single-view experiments; however, the chamfer metric on the multi-view data is weird. Hopefully this information is helpful.
I also double-checked that I did perform pretraining on the ShapeNet multi-view data. Actually, I also ran an experiment on the Pascal 3D dataset without pretraining; the final result is similar to the one with pretraining, except that it took longer to converge without pretraining.
Thanks for the feedback. I tried re-training with seed 0 and, for some reason, found that it couldn't train properly either. (This didn't happen before, so I was surprised myself that it happened 😐) I just tried a bunch of different seed numbers, and 3 to 6 seem to work for me. Could you try one of them? Apologies for the confusion!
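For example, a quick way to sweep those seeds (a sketch reusing the training command from this thread; the per-seed `--name` is only an assumption to keep the runs from overwriting each other):

```bash
for seed in 3 4 5 6; do
    python3 train.py --model=sdf_srn --yaml=options/shapenet/sdf_srn.yaml \
        --name=chair_seed$seed --data.shapenet.cat=chair --max_epoch=28 --seed=$seed
done
```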
Closing this issue for now, please feel free to reopen if there are further questions!
Hi @chenhsuanlin, thanks for the great work with the paper. I ran into the same problem when training on both single-view and multi-view data. I used the given command to train the model on several categories of the ShapeNet dataset after pretraining, but the chamfer distances didn't change during the whole process and the training loss oscillated between about 0.1 and 0.3. The output .ply file is empty (1 KB), while the depth maps and normal maps are confusing. Several seeds have been tried, but the results were the same. Another problem is that when evaluating the given pretrained model, the .ply file looks fine but the depth maps and normal maps are confusing. Here is the screenshot from the visdom page.
Hi @ZijinWu-AIA, it looks like the rendered images you have are from another source. Can you try evaluating on the ShapeNet renderings as described in the README (and make sure you have followed all the other steps in the README as well)?
Thanks for the quick reply! I double-checked my dataset and found that I had changed the folder names while running another codebase. Training and evaluation went well on both multi-view and single-view data after fixing this problem. Thanks again for this great work!
Hi, I use the following command to train on ShapeNet multi-view data (single-view training works fine):
```bash
python3 train.py --model=sdf_srn --yaml=options/shapenet/sdf_srn.yaml --name=chair --data.shapenet.cat=chair --max_epoch=28
```
The results do not look good, however. I trained several times, and all experiments on multi-view data give me similar results: the chamfer distance does not change during the whole process; the reconstructed image is similar to the ground truth and the depth map is also OK, but the predicted normal map is bad and the .ply is empty.
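For reference, a minimal way to confirm that the exported .ply is actually empty (a sketch assuming the `plyfile` package; the output path is hypothetical):

```python
# Print the element counts of the exported mesh (hypothetical path).
from plyfile import PlyData

ply = PlyData.read("output/chair.ply")  # adjust to your actual output path
for element in ply.elements:
    print(element.name, len(element.data))  # e.g. "vertex 0" confirms an empty mesh
```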