Open ghost opened 5 years ago
I found the solution to this: the pretrained model was saved without the attention_layer, and the final layer names don't match the current model definition, so you just need to rename the keys when loading the checkpoint.
I also encountered the same error. Could you please tell me how you solved it? Could you show me the code? Thanks.
I already tried @AI4UXP's method, but I got a different error instead.
@EricYunzhen, you can clone the old commit from this link https://github.com/eriklindernoren/Action-Recognition/tree/1c3916b3116931b5e3c8d5e51f24e33b9f53d8a3 (copy this link) if you just want to run the demo. Note that this version excludes the attention and bidirectional LSTM layers.
I ran into the same issue with a model I trained myself: the key names test_on_video.py expects do not match the keys in the trained checkpoint. This PyTorch thread shows how to rename the keys in the state dict so they match: https://discuss.pytorch.org/t/solved-keyerror-unexpected-key-module-encoder-embedding-weight-in-state-dict/1686 (in my case I also changed the code to run in parallel).
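The rename trick from that thread boils down to rebuilding the state dict with adjusted keys before calling `load_state_dict`. A minimal sketch of the common case it covers (stripping the `module.` prefix that `torch.nn.DataParallel` adds to every parameter name; the function name and usage paths below are my own, not from the repo):

```python
def strip_module_prefix(state_dict):
    """Return a copy of state_dict with any leading 'module.' removed from keys.

    Checkpoints saved from a model wrapped in torch.nn.DataParallel prefix
    every key with 'module.', which a plain (unwrapped) model will not accept.
    """
    prefix = "module."
    return {
        (key[len(prefix):] if key.startswith(prefix) else key): value
        for key, value in state_dict.items()
    }

# Hypothetical usage with PyTorch (checkpoint path and model are placeholders):
# checkpoint = torch.load("model_checkpoint.pth", map_location="cpu")
# model.load_state_dict(strip_module_prefix(checkpoint))
```

The same pattern works for any key mismatch: rewrite the offending keys into the names the current model class expects, then load the adjusted dict.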
Missing key(s) in state_dict: "lstm.lstm.weight_ih_l0_reverse", "lstm.lstm.weight_hh_l0_reverse", "lstm.lstm.bias_ih_l0_reverse", "lstm.lstm.bias_hh_l0_reverse", "output_layers.0.weight", "output_layers.0.bias", "output_layers.1.weight", "output_layers.1.bias", "output_layers.1.running_mean", "output_layers.1.running_var", "output_layers.3.weight", "output_layers.3.bias", "attention_layer.weight", "attention_layer.bias".
Unexpected key(s) in state_dict: "lstm.final.0.weight", "lstm.final.0.bias", "lstm.final.1.weight", "lstm.final.1.bias", "lstm.final.1.running_mean", "lstm.final.1.running_var", "lstm.final.1.num_batches_tracked", "lstm.final.3.weight", "lstm.final.3.bias".
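Reading this error, the old checkpoint's `lstm.final.*` keys correspond to the new model's `output_layers.*` head, while the attention and reverse-direction LSTM weights simply do not exist in the old checkpoint. A hedged sketch of how one could remap the renamed keys (the helper name is mine; whether `lstm.final` and `output_layers` have identical shapes must be verified against the repo's model code):

```python
def remap_final_to_output_layers(state_dict):
    """Rename old 'lstm.final.*' keys to the new 'output_layers.*' names.

    Assumes (unverified) that the old 'final' head and the new
    'output_layers' head have the same layer structure and shapes.
    """
    old_prefix = "lstm.final."
    new_prefix = "output_layers."
    remapped = {}
    for key, value in state_dict.items():
        if key.startswith(old_prefix):
            key = new_prefix + key[len(old_prefix):]
        remapped[key] = value
    return remapped

# The attention_layer and *_reverse (bidirectional) weights are not present
# in the old checkpoint at all, so they cannot be recovered by renaming;
# loading with strict=False would leave them randomly initialized:
# model.load_state_dict(remap_final_to_output_layers(checkpoint), strict=False)
```

Note that `strict=False` silently skips both missing and unexpected keys, so the attention weights would be untrained; retraining (or using the old commit linked above, which has no attention) is the safer option.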