AIS-Bonn / temporal_latticenet

This repository provides the official implementation for the publication "Abstract Flow for Temporal Semantic Segmentation on the Permutohedral Lattice"

Cannot load pretrained model #1

Closed: benemer closed this issue 2 years ago

benemer commented 2 years ago

Hi,

I tried to follow your instructions on how to load and test the provided pre-trained model on SemanticKITTI. However, when running `python test_ln.py --dataset semantickitti`, I get the following error:

Thu May  5 08:59:19 2022
PID:  2437

-------- Using SemanticKitti Dataset --------
Config file:  /workspace/temporal_latticenet/seq_config/lnn_eval_semantic_kitti.cfg
Lattice sigma:  0.6
Sequences: #scans: 4, cloud scope: 3
Features:  reflectance
The predictions will be saved to:  ./predictions
LabelMngr: nr of classes read 26

-------- Model definition --------
Fusion Modules:  ['gru', 'gru', 'aflow', 'aflow']
pointnet layers is  [16 32 64]
adding Early_GRU with nr_output_channels  64
adding Middle_GRU fusion with nr_output_channels  64
adding Bottleneck_AFLOW Module with nr_output_channels  256
AFLOW: Training alpha and beta values
adding LATE_AFLOW Module with nr_output_channels  192
AFLOW: Training alpha and beta values
adding down_resnet_block with nr of filters 64 and with dropout False
adding down_resnet_block with nr of filters 64 and with dropout False
adding bnReluCoarsen which outputs nr of channels  128
adding down_resnet_block with nr of filters 128 and with dropout False
adding down_resnet_block with nr of filters 128 and with dropout False
adding bnReluCoarsen which outputs nr of channels  256
adding bottleneck_resnet_block with nr of filters 256
adding bottleneck_resnet_block with nr of filters 256
adding bottleneck_resnet_block with nr of filters 256
adding bnReluFinefy which outputs nr of channels  128
adding up_resnet_block with nr of filters 256
adding bnReluFinefy which outputs nr of channels  128
adding up_resnet_block with nr of filters 192
adding up_resnet_block with nr of filters 192
adding stepdown with output of  192
adding stepdown with output of  96
adding bottleneck with output of  8
Loading state dict from  /workspace/temporal_latticenet/pretrained_models/12022022_0014_multi_Kitti_Ref_sigma0.6_typegru-gru-aflow-aflow_frames4_scope3_epoch2.pt
Traceback (most recent call last):
  File "test_ln.py", line 282, in <module>
    run(args.dataset)  
  File "test_ln.py", line 174, in run
    model.load_state_dict(torch.load(model_path))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1052, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for LNN_SEQ:
    Missing key(s) in state_dict: "point_net_seq.fusion_module.GRU.weight_ih", "point_net_seq.fusion_module.GRU.weight_hh", "point_net_seq.fusion_module.GRU.bias_ih", "point_net_seq.fusion_module.GRU.bias_hh", "point_net_seq.fusion_module.hidden_linear.weight", "point_net_seq.fusion_module.hidden_linear.bias", "recurrent_fusion_modules.0.GRU.weight_ih", "recurrent_fusion_modules.0.GRU.weight_hh", "recurrent_fusion_modules.0.GRU.bias_ih", "recurrent_fusion_modules.0.GRU.bias_hh", "recurrent_fusion_modules.0.hidden_linear.weight", "recurrent_fusion_modules.0.hidden_linear.bias", "recurrent_fusion_modules.1.AFLOW.alpha", "recurrent_fusion_modules.1.AFLOW.beta", "recurrent_fusion_modules.1.AFLOW.weight", "recurrent_fusion_modules.1.AFLOW.bias", "recurrent_fusion_modules.1.linear.weight", "recurrent_fusion_modules.1.linear.bias", "recurrent_fusion_modules.2.AFLOW.alpha", "recurrent_fusion_modules.2.AFLOW.beta", "recurrent_fusion_modules.2.AFLOW.weight", "recurrent_fusion_modules.2.AFLOW.bias", "recurrent_fusion_modules.2.linear.weight", "recurrent_fusion_modules.2.linear.bias", "coarsens_list.0.norm.gn.weight", "coarsens_list.0.norm.gn.bias", "coarsens_list.1.norm.gn.weight", "coarsens_list.1.norm.gn.bias". 
    Unexpected key(s) in state_dict: "middle_fusion_linear.linear.weight", "middle_fusion_linear.linear.bias", "middle_fusion_linear.hidden_linear.weight", "middle_fusion_linear.hidden_linear.bias", "late_fusion_linear.linear.weight", "late_fusion_linear.linear.bias", "late_fusion_linear.hidden_linear.weight", "late_fusion_linear.hidden_linear.bias", "middle_CGA.hidden_linear.weight", "middle_CGA.hidden_linear.bias", "late_CGA.hidden_linear.weight", "late_CGA.hidden_linear.bias", "middle_LSTM.lstm.weight_ih", "middle_LSTM.lstm.weight_hh", "middle_LSTM.lstm.bias_ih", "middle_LSTM.lstm.bias_hh", "middle_LSTM.hidden_linear.weight", "middle_LSTM.hidden_linear.bias", "late_LSTM.lstm.weight_ih", "late_LSTM.lstm.weight_hh", "late_LSTM.lstm.bias_ih", "late_LSTM.lstm.bias_hh", "late_LSTM.hidden_linear.weight", "late_LSTM.hidden_linear.bias", "middle_GRU.GRU.weight_ih", "middle_GRU.GRU.weight_hh", "middle_GRU.GRU.bias_ih", "middle_GRU.GRU.bias_hh", "middle_GRU.hidden_linear.weight", "middle_GRU.hidden_linear.bias", "late_GRU.GRU.weight_ih", "late_GRU.GRU.weight_hh", "late_GRU.GRU.bias_ih", "late_GRU.GRU.bias_hh", "late_GRU.hidden_linear.weight", "late_GRU.hidden_linear.bias", "middle_AFLOW.linear.weight", "middle_AFLOW.linear.bias", "AFLOW.AFLOW.weight", "AFLOW.AFLOW.bias", "AFLOW.linear.weight", "AFLOW.linear.bias", "late_AFLOW.AFLOW.weight", "late_AFLOW.AFLOW.bias", "late_AFLOW.linear.weight", "late_AFLOW.linear.bias", "point_net_seq.early_fusion_linear.linear.weight", "point_net_seq.early_fusion_linear.linear.bias", "point_net_seq.early_fusion_linear.hidden_linear.weight", "point_net_seq.early_fusion_linear.hidden_linear.bias", "point_net_seq.CGA.hidden_linear.weight", "point_net_seq.CGA.hidden_linear.bias", "point_net_seq.AFLOW.linear.weight", "point_net_seq.AFLOW.linear.bias", "point_net_seq.LSTM.lstm.weight_ih", "point_net_seq.LSTM.lstm.weight_hh", "point_net_seq.LSTM.lstm.bias_ih", "point_net_seq.LSTM.lstm.bias_hh", "point_net_seq.LSTM.hidden_linear.weight", "point_net_seq.LSTM.hidden_linear.bias", "point_net_seq.GRU.GRU.weight_ih", "point_net_seq.GRU.GRU.weight_hh", "point_net_seq.GRU.GRU.bias_ih", "point_net_seq.GRU.GRU.bias_hh", "point_net_seq.GRU.hidden_linear.weight", "point_net_seq.GRU.hidden_linear.bias", "resnet_blocks_per_up_lvl_list.0.0.conv1.norm.gn.weight", "resnet_blocks_per_up_lvl_list.0.0.conv1.norm.gn.bias", "resnet_blocks_per_up_lvl_list.0.0.conv1.conv.weight", "resnet_blocks_per_up_lvl_list.0.0.conv2.norm.gn.weight", "resnet_blocks_per_up_lvl_list.0.0.conv2.norm.gn.bias", "resnet_blocks_per_up_lvl_list.0.0.conv2.conv.weight". 

Was the pre-trained model trained with a different model structure or a different config than the one provided?
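
To narrow this down, the checkpoint keys can be compared against the freshly constructed model directly. A minimal sketch, assuming `model` and `model_path` are set up exactly as in `test_ln.py`:

```python
import torch

# Compare the parameter names stored in the checkpoint with the names the
# current model definition expects; the difference is exactly what the
# RuntimeError above lists.
checkpoint = torch.load(model_path, map_location="cpu")
ckpt_keys = set(checkpoint.keys())
model_keys = set(model.state_dict().keys())

print("In model but not in checkpoint:", sorted(model_keys - ckpt_keys))
print("In checkpoint but not in model:", sorted(ckpt_keys - model_keys))
```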

Thanks and best regards,
Benedikt

peerschuett commented 2 years ago

Hi, sorry for the late reply! I made the mistake of cleaning up too much of the code, so the uploaded models are no longer usable. I am currently training a new model, which I will upload in the coming days for you to try out.
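
If the mismatch were purely a matter of renamed modules, one could in principle remap the old checkpoint keys before loading. A rough, unverified sketch; the prefix mapping below is guessed from the missing/unexpected key lists in the error above, not taken from the actual code history:

```python
import torch

# Hypothetical remapping of old checkpoint keys to the cleaned-up module
# names. The prefixes are guessed from the error message and are NOT
# confirmed against the pre-cleanup code.
rename_map = {
    "point_net_seq.GRU.": "point_net_seq.fusion_module.",
    "middle_GRU.": "recurrent_fusion_modules.0.",
    "AFLOW.": "recurrent_fusion_modules.1.",
    "late_AFLOW.": "recurrent_fusion_modules.2.",
}

old_state = torch.load(model_path, map_location="cpu")
new_state = {}
for key, tensor in old_state.items():
    for old_prefix, new_prefix in rename_map.items():
        if key.startswith(old_prefix):
            key = new_prefix + key[len(old_prefix):]
            break
    new_state[key] = tensor

# strict=False tolerates parameters that still cannot be matched, e.g. the
# AFLOW alpha/beta values, which would then keep their fresh initialization.
model.load_state_dict(new_state, strict=False)
```

Since the cleanup evidently changed more than names, retraining is the reliable fix, which is what happened below.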

peerschuett commented 2 years ago

So, I trained a new model and uploaded it. On my machine, both training and testing work with it :)
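
For anyone verifying the re-uploaded checkpoint, the strict load from `test_ln.py` should now go through. A minimal check; the filename below is a placeholder for the newly uploaded file:

```python
import torch

# Sanity check for the new checkpoint; replace the placeholder filename
# with the actual file from pretrained_models/.
state = torch.load("pretrained_models/<new_checkpoint>.pt", map_location="cpu")
model.load_state_dict(state)  # raises RuntimeError on any key mismatch
print("checkpoint loaded cleanly")
```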