declare-lab / MIME

This repository contains PyTorch implementations of the models from the paper MIME: MIMicking Emotions for Empathetic Response Generation.
MIT License

Can't download the weights of the model #4

Open nastya236 opened 2 years ago

I want to evaluate the model (I downloaded the weights from Google Drive), but unfortunately the following error occurs during testing:

RuntimeError: Error(s) in loading state_dict for Train_MIME:
    Missing key(s) in state_dict: "encoder.enc.multi_head_attention.query_linear.weight", 
"encoder.enc.multi_head_attention.key_linear.weight", "encoder.enc.multi_head_attention.value_linear.weight", 
"encoder.enc.multi_head_attention.output_linear.weight", "encoder.enc.positionwise_feed_forward.layers.0.conv.weight", 
"encoder.enc.positionwise_feed_forward.layers.0.conv.bias", "encoder.enc.positionwise_feed_forward.layers.1.conv.weight",
 "encoder.enc.positionwise_feed_forward.layers.1.conv.bias", "encoder.enc.layer_norm_mha.gamma", 
"encoder.enc.layer_norm_mha.beta", "encoder.enc.layer_norm_ffn.gamma", "encoder.enc.layer_norm_ffn.beta", 
"emotion_input_encoder_1.enc.enc.multi_head_attention.query_linear.weight", 
"emotion_input_encoder_1.enc.enc.multi_head_attention.key_linear.weight", 
"emotion_input_encoder_1.enc.enc.multi_head_attention.value_linear.weight", 
"emotion_input_encoder_1.enc.enc.multi_head_attention.output_linear.weight", 
"emotion_input_encoder_1.enc.enc.positionwise_feed_forward.layers.0.conv.weight", 
"emotion_input_encoder_1.enc.enc.positionwise_feed_forward.layers.0.conv.bias", 
"emotion_input_encoder_1.enc.enc.positionwise_feed_forward.layers.1.conv.weight", 
"emotion_input_encoder_1.enc.enc.positionwise_feed_forward.layers.1.conv.bias", 
"emotion_input_encoder_1.enc.enc.layer_norm_mha.gamma", "emotion_input_encoder_1.enc.enc.layer_norm_mha.beta", 
"emotion_input_encoder_1.enc.enc.layer_norm_ffn.gamma", "emotion_input_encoder_1.enc.enc.layer_norm_ffn.beta", 
"emotion_input_encoder_2.enc.enc.multi_head_attention.query_linear.weight", 
"emotion_input_encoder_2.enc.enc.multi_head_attention.key_linear.weight", 
"emotion_input_encoder_2.enc.enc.multi_head_attention.value_linear.weight", 
"emotion_input_encoder_2.enc.enc.multi_head_attention.output_linear.weight", 
"emotion_input_encoder_2.enc.enc.positionwise_feed_forward.layers.0.conv.weight", 
"emotion_input_encoder_2.enc.enc.positionwise_feed_forward.layers.0.conv.bias", 
"emotion_input_encoder_2.enc.enc.positionwise_feed_forward.layers.1.conv.weight", 
"emotion_input_encoder_2.enc.enc.positionwise_feed_forward.layers.1.conv.bias", 
"emotion_input_encoder_2.enc.enc.layer_norm_mha.gamma", "emotion_input_encoder_2.enc.enc.layer_norm_mha.beta", 
"emotion_input_encoder_2.enc.enc.layer_norm_ffn.gamma", "emotion_input_encoder_2.enc.enc.layer_norm_ffn.beta". 

    Unexpected key(s) in state_dict: "encoder.enc.0.multi_head_attention.query_linear.weight", 
"encoder.enc.0.multi_head_attention.key_linear.weight", "encoder.enc.0.multi_head_attention.value_linear.weight", 
"encoder.enc.0.multi_head_attention.output_linear.weight", "encoder.enc.0.positionwise_feed_forward.layers.0.conv.weight", 
"encoder.enc.0.positionwise_feed_forward.layers.0.conv.bias", 
"encoder.enc.0.positionwise_feed_forward.layers.1.conv.weight", 
"encoder.enc.0.positionwise_feed_forward.layers.1.conv.bias", "encoder.enc.0.layer_norm_mha.gamma", 
"encoder.enc.0.layer_norm_mha.beta", "encoder.enc.0.layer_norm_ffn.gamma", "encoder.enc.0.layer_norm_ffn.beta", 
"emotion_input_encoder_1.enc.enc.0.multi_head_attention.query_linear.weight", 
"emotion_input_encoder_1.enc.enc.0.multi_head_attention.key_linear.weight", 
"emotion_input_encoder_1.enc.enc.0.multi_head_attention.value_linear.weight", 
"emotion_input_encoder_1.enc.enc.0.multi_head_attention.output_linear.weight", 
"emotion_input_encoder_1.enc.enc.0.positionwise_feed_forward.layers.0.conv.weight", 
"emotion_input_encoder_1.enc.enc.0.positionwise_feed_forward.layers.0.conv.bias", 
"emotion_input_encoder_1.enc.enc.0.positionwise_feed_forward.layers.1.conv.weight", 
"emotion_input_encoder_1.enc.enc.0.positionwise_feed_forward.layers.1.conv.bias", 
"emotion_input_encoder_1.enc.enc.0.layer_norm_mha.gamma", "emotion_input_encoder_1.enc.enc.0.layer_norm_mha.beta", 
"emotion_input_encoder_1.enc.enc.0.layer_norm_ffn.gamma", "emotion_input_encoder_1.enc.enc.0.layer_norm_ffn.beta", 
"emotion_input_encoder_2.enc.enc.0.multi_head_attention.query_linear.weight", 
"emotion_input_encoder_2.enc.enc.0.multi_head_attention.key_linear.weight", 
"emotion_input_encoder_2.enc.enc.0.multi_head_attention.value_linear.weight", 
"emotion_input_encoder_2.enc.enc.0.multi_head_attention.output_linear.weight", 
"emotion_input_encoder_2.enc.enc.0.positionwise_feed_forward.layers.0.conv.weight", 
"emotion_input_encoder_2.enc.enc.0.positionwise_feed_forward.layers.0.conv.bias", 
"emotion_input_encoder_2.enc.enc.0.positionwise_feed_forward.layers.1.conv.weight", 
"emotion_input_encoder_2.enc.enc.0.positionwise_feed_forward.layers.1.conv.bias", 
"emotion_input_encoder_2.enc.enc.0.layer_norm_mha.gamma", "emotion_input_encoder_2.enc.enc.0.layer_norm_mha.beta",
 "emotion_input_encoder_2.enc.enc.0.layer_norm_ffn.gamma", "emotion_input_encoder_2.enc.enc.0.layer_norm_ffn.beta". 

I rewrote the keys of the saved state_dict, replacing enc.0 with enc, and that works for me. Is this the correct way to load the model's weights?
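For anyone hitting the same mismatch: judging from the traceback, every missing key differs from its unexpected counterpart only by an extra `.0` after `enc` (the checkpoint was apparently saved from a model whose encoder block sat inside a one-element container, while the current `Train_MIME` definition expects a bare module). A minimal sketch of the rename, assuming that pattern holds for all keys (the function name and toy keys here are illustrative, not from the repo):

```python
import re

def strip_layer_index(state_dict):
    """Rewrite '...enc.0.<param>' keys to '...enc.<param>'.

    This maps the checkpoint's key names onto the module names the
    current model definition expects.
    """
    return {re.sub(r"enc\.0\.", "enc.", key): value
            for key, value in state_dict.items()}

# Toy example using key shapes from the traceback above.
saved = {
    "encoder.enc.0.multi_head_attention.query_linear.weight": "w1",
    "emotion_input_encoder_1.enc.enc.0.layer_norm_mha.gamma": "g1",
}
fixed = strip_layer_index(saved)
# The renamed dict can then be passed to model.load_state_dict(fixed).
```

Since `load_state_dict` also verifies tensor shapes, the fact that loading succeeds after the rename suggests the parameters do line up; still, it would be good to have the authors confirm which code revision the published weights were saved from.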

Thank you for your help, Anastasiia