ErlerPhilipp / points2surf

Learning Implicit Surfaces from Point Clouds (ECCV 2020)
https://www.cg.tuwien.ac.at/research/publications/2020/erler-2020-p2s/
MIT License

Missing keys and unexpected keys in state_dict #22

Closed Orig1n closed 2 years ago

Orig1n commented 2 years ago

When running the command bash experiments/eval_p2s_vanilla.sh, I get the following output:

Random Seed: 40938661
getting information for 100 shapes
models/p2s_vanilla_model_149.pth
Traceback (most recent call last):
  File "/home/origin/codes/points2surf-master/full_eval.py", line 81, in <module>
    full_eval(opt=points_to_surf_eval.parse_arguments())
  File "/home/origin/codes/points2surf-master/full_eval.py", line 46, in full_eval
    points_to_surf_eval.points_to_surf_eval(opt)
  File "/home/origin/codes/points2surf-master/source/points_to_surf_eval.py", line 338, in points_to_surf_eval
    p2s_model = make_regressor(train_opt=train_opt, pred_dim=pred_dim, model_filename=model_filename, device=device)
  File "/home/origin/codes/points2surf-master/source/points_to_surf_eval.py", line 172, in make_regressor
    p2s_model.load_state_dict(state)
  File "/home/origin/anaconda3/envs/p2s/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1497, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for DataParallel:
    Missing key(s) in state_dict: "module.feat_global.stn1.conv1.weight", "module.feat_global.stn1.conv1.bias", "module.feat_global.stn1.conv2.weight", "module.feat_global.stn1.conv2.bias", "module.feat_global.stn1.conv3.weight", "module.feat_global.stn1.conv3.bias", "module.feat_global.stn1.fc1.weight", "module.feat_global.stn1.fc1.bias", "module.feat_global.stn1.fc2.weight", "module.feat_global.stn1.fc2.bias", "module.feat_global.stn1.fc3.weight", "module.feat_global.stn1.fc3.bias", "module.feat_global.stn1.bn1.weight", "module.feat_global.stn1.bn1.bias", "module.feat_global.stn1.bn1.running_mean", "module.feat_global.stn1.bn1.running_var", "module.feat_global.stn1.bn2.weight", "module.feat_global.stn1.bn2.bias", "module.feat_global.stn1.bn2.running_mean", "module.feat_global.stn1.bn2.running_var", "module.feat_global.stn1.bn3.weight", "module.feat_global.stn1.bn3.bias", "module.feat_global.stn1.bn3.running_mean", "module.feat_global.stn1.bn3.running_var", "module.feat_global.stn1.bn4.weight", "module.feat_global.stn1.bn4.bias", "module.feat_global.stn1.bn4.running_mean", "module.feat_global.stn1.bn4.running_var", "module.feat_global.stn1.bn5.weight", "module.feat_global.stn1.bn5.bias", "module.feat_global.stn1.bn5.running_mean", "module.feat_global.stn1.bn5.running_var". 
    Unexpected key(s) in state_dict: "module.point_stn.conv1.weight", "module.point_stn.conv1.bias", "module.point_stn.conv2.weight", "module.point_stn.conv2.bias", "module.point_stn.conv3.weight", "module.point_stn.conv3.bias", "module.point_stn.fc1.weight", "module.point_stn.fc1.bias", "module.point_stn.fc2.weight", "module.point_stn.fc2.bias", "module.point_stn.fc3.weight", "module.point_stn.fc3.bias", "module.point_stn.bn1.weight", "module.point_stn.bn1.bias", "module.point_stn.bn1.running_mean", "module.point_stn.bn1.running_var", "module.point_stn.bn1.num_batches_tracked", "module.point_stn.bn2.weight", "module.point_stn.bn2.bias", "module.point_stn.bn2.running_mean", "module.point_stn.bn2.running_var", "module.point_stn.bn2.num_batches_tracked", "module.point_stn.bn3.weight", "module.point_stn.bn3.bias", "module.point_stn.bn3.running_mean", "module.point_stn.bn3.running_var", "module.point_stn.bn3.num_batches_tracked", "module.point_stn.bn4.weight", "module.point_stn.bn4.bias", "module.point_stn.bn4.running_mean", "module.point_stn.bn4.running_var", "module.point_stn.bn4.num_batches_tracked", "module.point_stn.bn5.weight", "module.point_stn.bn5.bias", "module.point_stn.bn5.running_mean", "module.point_stn.bn5.running_var", "module.point_stn.bn5.num_batches_tracked". 
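The RuntimeError above is PyTorch's strict `load_state_dict` check: the model defines parameters (`module.feat_global.stn1.*`) that the checkpoint does not contain, while the checkpoint contains parameters (`module.point_stn.*`) that the model does not define. A minimal sketch of that key diff, using plain dicts in place of real state_dicts (the helper name and sample keys are illustrative, not from the points2surf code):

```python
# Sketch: compute the same (missing, unexpected) key sets that
# load_state_dict reports in its RuntimeError. A state_dict is a
# mapping from parameter names to tensors; plain lists/dicts stand
# in here so the logic is visible without loading a model.

def diff_state_dict_keys(model_keys, checkpoint_keys):
    """Return (missing, unexpected) sorted key lists."""
    model_keys = set(model_keys)
    checkpoint_keys = set(checkpoint_keys)
    missing = sorted(model_keys - checkpoint_keys)     # model expects, checkpoint lacks
    unexpected = sorted(checkpoint_keys - model_keys)  # checkpoint has, model lacks
    return missing, unexpected

# Illustrative keys mirroring the error message above
model = ["module.feat_global.stn1.conv1.weight", "module.decoder.fc.weight"]
ckpt = ["module.point_stn.conv1.weight", "module.decoder.fc.weight"]
missing, unexpected = diff_state_dict_keys(model, ckpt)
print(missing)     # ['module.feat_global.stn1.conv1.weight']
print(unexpected)  # ['module.point_stn.conv1.weight']
```

Running this diff on a real checkpoint (`state = torch.load(path)` keys vs `model.state_dict()` keys) shows whether the mismatch is a simple renaming or a genuinely different architecture.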
ErlerPhilipp commented 2 years ago

Hi @Orig1n

Apparently, the saved model doesn't match the code in the evaluation. Is this model self-trained or from the official uploads?

Orig1n commented 2 years ago

I think it's from the official uploads. I followed the instructions in README.md, and it works when I run bash experiments/eval_p2s_small_radius.sh

ErlerPhilipp commented 2 years ago

Ok, that is bad. It looks like I trained the vanilla model with old code (which is in a private repo) and some other models with the current code. I would need to spend some time to fix it. Strange that no one reported it before.

can you work with the max model instead?
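Since the missing and unexpected key lists pair up (`module.point_stn.*` in the checkpoint vs `module.feat_global.stn1.*` in the model), one possible workaround is renaming the checkpoint keys before loading. This is only a hypothetical sketch: whether the old STN weights are actually layer-for-layer compatible with the current model is an assumption based solely on the key names in the error message.

```python
# Hypothetical workaround sketch: rename old-style checkpoint keys
# ("module.point_stn.*") to the names the current model expects
# ("module.feat_global.stn1.*"). Compatibility of the underlying
# weights is assumed, not verified.

OLD_PREFIX = "module.point_stn."
NEW_PREFIX = "module.feat_global.stn1."

def remap_keys(state_dict):
    """Return a copy of state_dict with OLD_PREFIX keys renamed."""
    remapped = {}
    for key, value in state_dict.items():
        if key.startswith(OLD_PREFIX):
            key = NEW_PREFIX + key[len(OLD_PREFIX):]
        remapped[key] = value
    return remapped

# Illustrative keys; with a real checkpoint the values would be tensors
old = {"module.point_stn.conv1.weight": "w", "module.other.bias": "b"}
print(remap_keys(old))
# {'module.feat_global.stn1.conv1.weight': 'w', 'module.other.bias': 'b'}
```

With a real checkpoint this would be `p2s_model.load_state_dict(remap_keys(torch.load(model_filename)))`; any leftover `num_batches_tracked` entries could be dropped or loaded with `strict=False` if they still trip the check.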

Orig1n commented 2 years ago

OK, thanks for the help.