mayank-git-hub / ETE-Speech-Recognition

Implementation of Hybrid CTC/Attention Architecture for End-to-End Speech Recognition in pure python and PyTorch
Apache License 2.0

Pretrained model #5

Closed TheisTrue closed 4 years ago

TheisTrue commented 4 years ago

Hi. The training is very time-consuming, so could you provide the pretrained model, if it is available? Thank you very much!

mayank-git-hub commented 4 years ago

When I ran the code, I had a different task at hand for which I did not need a particularly good model, so I did not train it to strong results. You can find the pre-trained model at https://drive.google.com/open?id=1KOpQLAnYfiVb2FdRPo002ToaatOtV8Ww.

Further, I would suggest downloading a pre-trained model from ESPNET; with slight modifications and fine-tuning, you should be able to get good results.

TheisTrue commented 4 years ago

I used this model to test, and the following exception occurred:

  File "/home/Documents/ETE/test.py", line 105, in main
    model.load_state_dict(checkpoint['model'])
  File "/home/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 777, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for E2E:

mayank-git-hub commented 4 years ago

Can you send the entire log of the exception? I believe the log you mentioned is truncated.

TheisTrue commented 4 years ago

log:

Starting Testing
Traceback (most recent call last):
  File "main.py", line 61, in <module>
    main()
  File "/home/.local/lib/python3.7/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/home/.local/lib/python3.7/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/home/.local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/.local/lib/python3.7/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/.local/lib/python3.7/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "main.py", line 42, in test
    test.main()
  File "/home/Documents/ETE/test.py", line 105, in main
    model.load_state_dict(checkpoint['model'])
  File "/home/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 777, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for E2E:
Missing key(s) in state_dict: "encoder.embed.conv.0.weight", "encoder.embed.conv.0.bias", "encoder.embed.conv.2.weight", "encoder.embed.conv.2.bias", "encoder.embed.out.0.weight", "encoder.embed.out.0.bias", "encoder.embed.out.1.pe", "encoder.encoders.0.self_attn.linear_q.weight", "encoder.encoders.0.self_attn.linear_q.bias", "encoder.encoders.0.self_attn.linear_k.weight", "encoder.encoders.0.self_attn.linear_k.bias", "encoder.encoders.0.self_attn.linear_v.weight", "encoder.encoders.0.self_attn.linear_v.bias", "encoder.encoders.0.self_attn.linear_out.weight", "encoder.encoders.0.self_attn.linear_out.bias", "encoder.encoders.0.feed_forward.w_1.weight", "encoder.encoders.0.feed_forward.w_1.bias", "encoder.encoders.0.feed_forward.w_2.weight", "encoder.encoders.0.feed_forward.w_2.bias", "encoder.encoders.0.norm1.weight", "encoder.encoders.0.norm1.bias", "encoder.encoders.0.norm2.weight", "encoder.encoders.0.norm2.bias", 
"encoder.encoders.1.self_attn.linear_q.weight", "encoder.encoders.1.self_attn.linear_q.bias", "encoder.encoders.1.self_attn.linear_k.weight", "encoder.encoders.1.self_attn.linear_k.bias", "encoder.encoders.1.self_attn.linear_v.weight", "encoder.encoders.1.self_attn.linear_v.bias", "encoder.encoders.1.self_attn.linear_out.weight", "encoder.encoders.1.self_attn.linear_out.bias", "encoder.encoders.1.feed_forward.w_1.weight", "encoder.encoders.1.feed_forward.w_1.bias", "encoder.encoders.1.feed_forward.w_2.weight", "encoder.encoders.1.feed_forward.w_2.bias", "encoder.encoders.1.norm1.weight", "encoder.encoders.1.norm1.bias", "encoder.encoders.1.norm2.weight", "encoder.encoders.1.norm2.bias", "encoder.encoders.2.self_attn.linear_q.weight", "encoder.encoders.2.self_attn.linear_q.bias", "encoder.encoders.2.self_attn.linear_k.weight", "encoder.encoders.2.self_attn.linear_k.bias", "encoder.encoders.2.self_attn.linear_v.weight", "encoder.encoders.2.self_attn.linear_v.bias", "encoder.encoders.2.self_attn.linear_out.weight", "encoder.encoders.2.self_attn.linear_out.bias", "encoder.encoders.2.feed_forward.w_1.weight", "encoder.encoders.2.feed_forward.w_1.bias", "encoder.encoders.2.feed_forward.w_2.weight", "encoder.encoders.2.feed_forward.w_2.bias", "encoder.encoders.2.norm1.weight", "encoder.encoders.2.norm1.bias", "encoder.encoders.2.norm2.weight", "encoder.encoders.2.norm2.bias", "encoder.encoders.3.self_attn.linear_q.weight", "encoder.encoders.3.self_attn.linear_q.bias", "encoder.encoders.3.self_attn.linear_k.weight", "encoder.encoders.3.self_attn.linear_k.bias", "encoder.encoders.3.self_attn.linear_v.weight", "encoder.encoders.3.self_attn.linear_v.bias", "encoder.encoders.3.self_attn.linear_out.weight", "encoder.encoders.3.self_attn.linear_out.bias", "encoder.encoders.3.feed_forward.w_1.weight", "encoder.encoders.3.feed_forward.w_1.bias", "encoder.encoders.3.feed_forward.w_2.weight", "encoder.encoders.3.feed_forward.w_2.bias", "encoder.encoders.3.norm1.weight", 
"encoder.encoders.3.norm1.bias", "encoder.encoders.3.norm2.weight", "encoder.encoders.3.norm2.bias", "encoder.encoders.4.self_attn.linear_q.weight", "encoder.encoders.4.self_attn.linear_q.bias", "encoder.encoders.4.self_attn.linear_k.weight", "encoder.encoders.4.self_attn.linear_k.bias", "encoder.encoders.4.self_attn.linear_v.weight", "encoder.encoders.4.self_attn.linear_v.bias", "encoder.encoders.4.self_attn.linear_out.weight", "encoder.encoders.4.self_attn.linear_out.bias", "encoder.encoders.4.feed_forward.w_1.weight", "encoder.encoders.4.feed_forward.w_1.bias", "encoder.encoders.4.feed_forward.w_2.weight", "encoder.encoders.4.feed_forward.w_2.bias", "encoder.encoders.4.norm1.weight", "encoder.encoders.4.norm1.bias", "encoder.encoders.4.norm2.weight", "encoder.encoders.4.norm2.bias", "encoder.encoders.5.self_attn.linear_q.weight", "encoder.encoders.5.self_attn.linear_q.bias", "encoder.encoders.5.self_attn.linear_k.weight", "encoder.encoders.5.self_attn.linear_k.bias", "encoder.encoders.5.self_attn.linear_v.weight", "encoder.encoders.5.self_attn.linear_v.bias", "encoder.encoders.5.self_attn.linear_out.weight", "encoder.encoders.5.self_attn.linear_out.bias", "encoder.encoders.5.feed_forward.w_1.weight", "encoder.encoders.5.feed_forward.w_1.bias", "encoder.encoders.5.feed_forward.w_2.weight", "encoder.encoders.5.feed_forward.w_2.bias", "encoder.encoders.5.norm1.weight", "encoder.encoders.5.norm1.bias", "encoder.encoders.5.norm2.weight", "encoder.encoders.5.norm2.bias", "encoder.encoders.6.self_attn.linear_q.weight", "encoder.encoders.6.self_attn.linear_q.bias", "encoder.encoders.6.self_attn.linear_k.weight", "encoder.encoders.6.self_attn.linear_k.bias", "encoder.encoders.6.self_attn.linear_v.weight", "encoder.encoders.6.self_attn.linear_v.bias", "encoder.encoders.6.self_attn.linear_out.weight", "encoder.encoders.6.self_attn.linear_out.bias", "encoder.encoders.6.feed_forward.w_1.weight", "encoder.encoders.6.feed_forward.w_1.bias", 
"encoder.encoders.6.feed_forward.w_2.weight", "encoder.encoders.6.feed_forward.w_2.bias", "encoder.encoders.6.norm1.weight", "encoder.encoders.6.norm1.bias", "encoder.encoders.6.norm2.weight", "encoder.encoders.6.norm2.bias", "encoder.encoders.7.self_attn.linear_q.weight", "encoder.encoders.7.self_attn.linear_q.bias", "encoder.encoders.7.self_attn.linear_k.weight", "encoder.encoders.7.self_attn.linear_k.bias", "encoder.encoders.7.self_attn.linear_v.weight", "encoder.encoders.7.self_attn.linear_v.bias", "encoder.encoders.7.self_attn.linear_out.weight", "encoder.encoders.7.self_attn.linear_out.bias", "encoder.encoders.7.feed_forward.w_1.weight", "encoder.encoders.7.feed_forward.w_1.bias", "encoder.encoders.7.feed_forward.w_2.weight", "encoder.encoders.7.feed_forward.w_2.bias", "encoder.encoders.7.norm1.weight", "encoder.encoders.7.norm1.bias", "encoder.encoders.7.norm2.weight", "encoder.encoders.7.norm2.bias", "encoder.encoders.8.self_attn.linear_q.weight", "encoder.encoders.8.self_attn.linear_q.bias", "encoder.encoders.8.self_attn.linear_k.weight", "encoder.encoders.8.self_attn.linear_k.bias", "encoder.encoders.8.self_attn.linear_v.weight", "encoder.encoders.8.self_attn.linear_v.bias", "encoder.encoders.8.self_attn.linear_out.weight", "encoder.encoders.8.self_attn.linear_out.bias", "encoder.encoders.8.feed_forward.w_1.weight", "encoder.encoders.8.feed_forward.w_1.bias", "encoder.encoders.8.feed_forward.w_2.weight", "encoder.encoders.8.feed_forward.w_2.bias", "encoder.encoders.8.norm1.weight", "encoder.encoders.8.norm1.bias", "encoder.encoders.8.norm2.weight", "encoder.encoders.8.norm2.bias", "encoder.encoders.9.self_attn.linear_q.weight", "encoder.encoders.9.self_attn.linear_q.bias", "encoder.encoders.9.self_attn.linear_k.weight", "encoder.encoders.9.self_attn.linear_k.bias", "encoder.encoders.9.self_attn.linear_v.weight", "encoder.encoders.9.self_attn.linear_v.bias", "encoder.encoders.9.self_attn.linear_out.weight", "encoder.encoders.9.self_attn.linear_out.bias", 
"encoder.encoders.9.feed_forward.w_1.weight", "encoder.encoders.9.feed_forward.w_1.bias", "encoder.encoders.9.feed_forward.w_2.weight", "encoder.encoders.9.feed_forward.w_2.bias", "encoder.encoders.9.norm1.weight", "encoder.encoders.9.norm1.bias", "encoder.encoders.9.norm2.weight", "encoder.encoders.9.norm2.bias", "encoder.encoders.10.self_attn.linear_q.weight", "encoder.encoders.10.self_attn.linear_q.bias", "encoder.encoders.10.self_attn.linear_k.weight", "encoder.encoders.10.self_attn.linear_k.bias", "encoder.encoders.10.self_attn.linear_v.weight", "encoder.encoders.10.self_attn.linear_v.bias", "encoder.encoders.10.self_attn.linear_out.weight", "encoder.encoders.10.self_attn.linear_out.bias", "encoder.encoders.10.feed_forward.w_1.weight", "encoder.encoders.10.feed_forward.w_1.bias", "encoder.encoders.10.feed_forward.w_2.weight", "encoder.encoders.10.feed_forward.w_2.bias", "encoder.encoders.10.norm1.weight", "encoder.encoders.10.norm1.bias", "encoder.encoders.10.norm2.weight", "encoder.encoders.10.norm2.bias", "encoder.encoders.11.self_attn.linear_q.weight", "encoder.encoders.11.self_attn.linear_q.bias", "encoder.encoders.11.self_attn.linear_k.weight", "encoder.encoders.11.self_attn.linear_k.bias", "encoder.encoders.11.self_attn.linear_v.weight", "encoder.encoders.11.self_attn.linear_v.bias", "encoder.encoders.11.self_attn.linear_out.weight", "encoder.encoders.11.self_attn.linear_out.bias", "encoder.encoders.11.feed_forward.w_1.weight", "encoder.encoders.11.feed_forward.w_1.bias", "encoder.encoders.11.feed_forward.w_2.weight", "encoder.encoders.11.feed_forward.w_2.bias", "encoder.encoders.11.norm1.weight", "encoder.encoders.11.norm1.bias", "encoder.encoders.11.norm2.weight", "encoder.encoders.11.norm2.bias", "encoder.after_norm.weight", "encoder.after_norm.bias", "decoder.embed.0.weight", "decoder.embed.1.pe", "decoder.decoders.0.self_attn.linear_q.weight", "decoder.decoders.0.self_attn.linear_q.bias", "decoder.decoders.0.self_attn.linear_k.weight", 
"decoder.decoders.0.self_attn.linear_k.bias", "decoder.decoders.0.self_attn.linear_v.weight", "decoder.decoders.0.self_attn.linear_v.bias", "decoder.decoders.0.self_attn.linear_out.weight", "decoder.decoders.0.self_attn.linear_out.bias", "decoder.decoders.0.src_attn.linear_q.weight", "decoder.decoders.0.src_attn.linear_q.bias", "decoder.decoders.0.src_attn.linear_k.weight", "decoder.decoders.0.src_attn.linear_k.bias", "decoder.decoders.0.src_attn.linear_v.weight", "decoder.decoders.0.src_attn.linear_v.bias", "decoder.decoders.0.src_attn.linear_out.weight", "decoder.decoders.0.src_attn.linear_out.bias", "decoder.decoders.0.feed_forward.w_1.weight", "decoder.decoders.0.feed_forward.w_1.bias", "decoder.decoders.0.feed_forward.w_2.weight", "decoder.decoders.0.feed_forward.w_2.bias", "decoder.decoders.0.norm1.weight", "decoder.decoders.0.norm1.bias", "decoder.decoders.0.norm2.weight", "decoder.decoders.0.norm2.bias", "decoder.decoders.0.norm3.weight", "decoder.decoders.0.norm3.bias", "decoder.decoders.1.self_attn.linear_q.weight", "decoder.decoders.1.self_attn.linear_q.bias", "decoder.decoders.1.self_attn.linear_k.weight", "decoder.decoders.1.self_attn.linear_k.bias", "decoder.decoders.1.self_attn.linear_v.weight", "decoder.decoders.1.self_attn.linear_v.bias", "decoder.decoders.1.self_attn.linear_out.weight", "decoder.decoders.1.self_attn.linear_out.bias", "decoder.decoders.1.src_attn.linear_q.weight", "decoder.decoders.1.src_attn.linear_q.bias", "decoder.decoders.1.src_attn.linear_k.weight", "decoder.decoders.1.src_attn.linear_k.bias", "decoder.decoders.1.src_attn.linear_v.weight", "decoder.decoders.1.src_attn.linear_v.bias", "decoder.decoders.1.src_attn.linear_out.weight", "decoder.decoders.1.src_attn.linear_out.bias", "decoder.decoders.1.feed_forward.w_1.weight", "decoder.decoders.1.feed_forward.w_1.bias", "decoder.decoders.1.feed_forward.w_2.weight", "decoder.decoders.1.feed_forward.w_2.bias", "decoder.decoders.1.norm1.weight", "decoder.decoders.1.norm1.bias", 
"decoder.decoders.1.norm2.weight", "decoder.decoders.1.norm2.bias", "decoder.decoders.1.norm3.weight", "decoder.decoders.1.norm3.bias", "decoder.decoders.2.self_attn.linear_q.weight", "decoder.decoders.2.self_attn.linear_q.bias", "decoder.decoders.2.self_attn.linear_k.weight", "decoder.decoders.2.self_attn.linear_k.bias", "decoder.decoders.2.self_attn.linear_v.weight", "decoder.decoders.2.self_attn.linear_v.bias", "decoder.decoders.2.self_attn.linear_out.weight", "decoder.decoders.2.self_attn.linear_out.bias", "decoder.decoders.2.src_attn.linear_q.weight", "decoder.decoders.2.src_attn.linear_q.bias", "decoder.decoders.2.src_attn.linear_k.weight", "decoder.decoders.2.src_attn.linear_k.bias", "decoder.decoders.2.src_attn.linear_v.weight", "decoder.decoders.2.src_attn.linear_v.bias", "decoder.decoders.2.src_attn.linear_out.weight", "decoder.decoders.2.src_attn.linear_out.bias", "decoder.decoders.2.feed_forward.w_1.weight", "decoder.decoders.2.feed_forward.w_1.bias", "decoder.decoders.2.feed_forward.w_2.weight", "decoder.decoders.2.feed_forward.w_2.bias", "decoder.decoders.2.norm1.weight", "decoder.decoders.2.norm1.bias", "decoder.decoders.2.norm2.weight", "decoder.decoders.2.norm2.bias", "decoder.decoders.2.norm3.weight", "decoder.decoders.2.norm3.bias", "decoder.decoders.3.self_attn.linear_q.weight", "decoder.decoders.3.self_attn.linear_q.bias", "decoder.decoders.3.self_attn.linear_k.weight", "decoder.decoders.3.self_attn.linear_k.bias", "decoder.decoders.3.self_attn.linear_v.weight", "decoder.decoders.3.self_attn.linear_v.bias", "decoder.decoders.3.self_attn.linear_out.weight", "decoder.decoders.3.self_attn.linear_out.bias", "decoder.decoders.3.src_attn.linear_q.weight", "decoder.decoders.3.src_attn.linear_q.bias", "decoder.decoders.3.src_attn.linear_k.weight", "decoder.decoders.3.src_attn.linear_k.bias", "decoder.decoders.3.src_attn.linear_v.weight", "decoder.decoders.3.src_attn.linear_v.bias", "decoder.decoders.3.src_attn.linear_out.weight", 
"decoder.decoders.3.src_attn.linear_out.bias", "decoder.decoders.3.feed_forward.w_1.weight", "decoder.decoders.3.feed_forward.w_1.bias", "decoder.decoders.3.feed_forward.w_2.weight", "decoder.decoders.3.feed_forward.w_2.bias", "decoder.decoders.3.norm1.weight", "decoder.decoders.3.norm1.bias", "decoder.decoders.3.norm2.weight", "decoder.decoders.3.norm2.bias", "decoder.decoders.3.norm3.weight", "decoder.decoders.3.norm3.bias", "decoder.decoders.4.self_attn.linear_q.weight", "decoder.decoders.4.self_attn.linear_q.bias", "decoder.decoders.4.self_attn.linear_k.weight", "decoder.decoders.4.self_attn.linear_k.bias", "decoder.decoders.4.self_attn.linear_v.weight", "decoder.decoders.4.self_attn.linear_v.bias", "decoder.decoders.4.self_attn.linear_out.weight", "decoder.decoders.4.self_attn.linear_out.bias", "decoder.decoders.4.src_attn.linear_q.weight", "decoder.decoders.4.src_attn.linear_q.bias", "decoder.decoders.4.src_attn.linear_k.weight", "decoder.decoders.4.src_attn.linear_k.bias", "decoder.decoders.4.src_attn.linear_v.weight", "decoder.decoders.4.src_attn.linear_v.bias", "decoder.decoders.4.src_attn.linear_out.weight", "decoder.decoders.4.src_attn.linear_out.bias", "decoder.decoders.4.feed_forward.w_1.weight", "decoder.decoders.4.feed_forward.w_1.bias", "decoder.decoders.4.feed_forward.w_2.weight", "decoder.decoders.4.feed_forward.w_2.bias", "decoder.decoders.4.norm1.weight", "decoder.decoders.4.norm1.bias", "decoder.decoders.4.norm2.weight", "decoder.decoders.4.norm2.bias", "decoder.decoders.4.norm3.weight", "decoder.decoders.4.norm3.bias", "decoder.decoders.5.self_attn.linear_q.weight", "decoder.decoders.5.self_attn.linear_q.bias", "decoder.decoders.5.self_attn.linear_k.weight", "decoder.decoders.5.self_attn.linear_k.bias", "decoder.decoders.5.self_attn.linear_v.weight", "decoder.decoders.5.self_attn.linear_v.bias", "decoder.decoders.5.self_attn.linear_out.weight", "decoder.decoders.5.self_attn.linear_out.bias", "decoder.decoders.5.src_attn.linear_q.weight", 
"decoder.decoders.5.src_attn.linear_q.bias", "decoder.decoders.5.src_attn.linear_k.weight", "decoder.decoders.5.src_attn.linear_k.bias", "decoder.decoders.5.src_attn.linear_v.weight", "decoder.decoders.5.src_attn.linear_v.bias", "decoder.decoders.5.src_attn.linear_out.weight", "decoder.decoders.5.src_attn.linear_out.bias", "decoder.decoders.5.feed_forward.w_1.weight", "decoder.decoders.5.feed_forward.w_1.bias", "decoder.decoders.5.feed_forward.w_2.weight", "decoder.decoders.5.feed_forward.w_2.bias", "decoder.decoders.5.norm1.weight", "decoder.decoders.5.norm1.bias", "decoder.decoders.5.norm2.weight", "decoder.decoders.5.norm2.bias", "decoder.decoders.5.norm3.weight", "decoder.decoders.5.norm3.bias", "decoder.after_norm.weight", "decoder.after_norm.bias", "decoder.output_layer.weight", "decoder.output_layer.bias", "ctc.ctc_lo.weight", "ctc.ctc_lo.bias". Unexpected key(s) in state_dict: "module.encoder.embed.conv.0.weight", "module.encoder.embed.conv.0.bias", "module.encoder.embed.conv.2.weight", "module.encoder.embed.conv.2.bias", "module.encoder.embed.out.0.weight", "module.encoder.embed.out.0.bias", "module.encoder.embed.out.1.pe", "module.encoder.encoders.0.self_attn.linear_q.weight", "module.encoder.encoders.0.self_attn.linear_q.bias", "module.encoder.encoders.0.self_attn.linear_k.weight", "module.encoder.encoders.0.self_attn.linear_k.bias", "module.encoder.encoders.0.self_attn.linear_v.weight", "module.encoder.encoders.0.self_attn.linear_v.bias", "module.encoder.encoders.0.self_attn.linear_out.weight", "module.encoder.encoders.0.self_attn.linear_out.bias", "module.encoder.encoders.0.feed_forward.w_1.weight", "module.encoder.encoders.0.feed_forward.w_1.bias", "module.encoder.encoders.0.feed_forward.w_2.weight", "module.encoder.encoders.0.feed_forward.w_2.bias", "module.encoder.encoders.0.norm1.weight", "module.encoder.encoders.0.norm1.bias", "module.encoder.encoders.0.norm2.weight", "module.encoder.encoders.0.norm2.bias", 
"module.encoder.encoders.1.self_attn.linear_q.weight", "module.encoder.encoders.1.self_attn.linear_q.bias", "module.encoder.encoders.1.self_attn.linear_k.weight", "module.encoder.encoders.1.self_attn.linear_k.bias", "module.encoder.encoders.1.self_attn.linear_v.weight", "module.encoder.encoders.1.self_attn.linear_v.bias", "module.encoder.encoders.1.self_attn.linear_out.weight", "module.encoder.encoders.1.self_attn.linear_out.bias", "module.encoder.encoders.1.feed_forward.w_1.weight", "module.encoder.encoders.1.feed_forward.w_1.bias", "module.encoder.encoders.1.feed_forward.w_2.weight", "module.encoder.encoders.1.feed_forward.w_2.bias", "module.encoder.encoders.1.norm1.weight", "module.encoder.encoders.1.norm1.bias", "module.encoder.encoders.1.norm2.weight", "module.encoder.encoders.1.norm2.bias", "module.encoder.encoders.2.self_attn.linear_q.weight", "module.encoder.encoders.2.self_attn.linear_q.bias", "module.encoder.encoders.2.self_attn.linear_k.weight", "module.encoder.encoders.2.self_attn.linear_k.bias", "module.encoder.encoders.2.self_attn.linear_v.weight", "module.encoder.encoders.2.self_attn.linear_v.bias", "module.encoder.encoders.2.self_attn.linear_out.weight", "module.encoder.encoders.2.self_attn.linear_out.bias", "module.encoder.encoders.2.feed_forward.w_1.weight", "module.encoder.encoders.2.feed_forward.w_1.bias", "module.encoder.encoders.2.feed_forward.w_2.weight", "module.encoder.encoders.2.feed_forward.w_2.bias", "module.encoder.encoders.2.norm1.weight", "module.encoder.encoders.2.norm1.bias", "module.encoder.encoders.2.norm2.weight", "module.encoder.encoders.2.norm2.bias", "module.encoder.encoders.3.self_attn.linear_q.weight", "module.encoder.encoders.3.self_attn.linear_q.bias", "module.encoder.encoders.3.self_attn.linear_k.weight", "module.encoder.encoders.3.self_attn.linear_k.bias", "module.encoder.encoders.3.self_attn.linear_v.weight", "module.encoder.encoders.3.self_attn.linear_v.bias", "module.encoder.encoders.3.self_attn.linear_out.weight", 
"module.encoder.encoders.3.self_attn.linear_out.bias", "module.encoder.encoders.3.feed_forward.w_1.weight", "module.encoder.encoders.3.feed_forward.w_1.bias", "module.encoder.encoders.3.feed_forward.w_2.weight", "module.encoder.encoders.3.feed_forward.w_2.bias", "module.encoder.encoders.3.norm1.weight", "module.encoder.encoders.3.norm1.bias", "module.encoder.encoders.3.norm2.weight", "module.encoder.encoders.3.norm2.bias", "module.encoder.encoders.4.self_attn.linear_q.weight", "module.encoder.encoders.4.self_attn.linear_q.bias", "module.encoder.encoders.4.self_attn.linear_k.weight", "module.encoder.encoders.4.self_attn.linear_k.bias", "module.encoder.encoders.4.self_attn.linear_v.weight", "module.encoder.encoders.4.self_attn.linear_v.bias", "module.encoder.encoders.4.self_attn.linear_out.weight", "module.encoder.encoders.4.self_attn.linear_out.bias", "module.encoder.encoders.4.feed_forward.w_1.weight", "module.encoder.encoders.4.feed_forward.w_1.bias", "module.encoder.encoders.4.feed_forward.w_2.weight", "module.encoder.encoders.4.feed_forward.w_2.bias", "module.encoder.encoders.4.norm1.weight", "module.encoder.encoders.4.norm1.bias", "module.encoder.encoders.4.norm2.weight", "module.encoder.encoders.4.norm2.bias", "module.encoder.encoders.5.self_attn.linear_q.weight", "module.encoder.encoders.5.self_attn.linear_q.bias", "module.encoder.encoders.5.self_attn.linear_k.weight", "module.encoder.encoders.5.self_attn.linear_k.bias", "module.encoder.encoders.5.self_attn.linear_v.weight", "module.encoder.encoders.5.self_attn.linear_v.bias", "module.encoder.encoders.5.self_attn.linear_out.weight", "module.encoder.encoders.5.self_attn.linear_out.bias", "module.encoder.encoders.5.feed_forward.w_1.weight", "module.encoder.encoders.5.feed_forward.w_1.bias", "module.encoder.encoders.5.feed_forward.w_2.weight", "module.encoder.encoders.5.feed_forward.w_2.bias", "module.encoder.encoders.5.norm1.weight", "module.encoder.encoders.5.norm1.bias", 
"module.encoder.encoders.5.norm2.weight", "module.encoder.encoders.5.norm2.bias", "module.encoder.encoders.6.self_attn.linear_q.weight", "module.encoder.encoders.6.self_attn.linear_q.bias", "module.encoder.encoders.6.self_attn.linear_k.weight", "module.encoder.encoders.6.self_attn.linear_k.bias", "module.encoder.encoders.6.self_attn.linear_v.weight", "module.encoder.encoders.6.self_attn.linear_v.bias", "module.encoder.encoders.6.self_attn.linear_out.weight", "module.encoder.encoders.6.self_attn.linear_out.bias", "module.encoder.encoders.6.feed_forward.w_1.weight", "module.encoder.encoders.6.feed_forward.w_1.bias", "module.encoder.encoders.6.feed_forward.w_2.weight", "module.encoder.encoders.6.feed_forward.w_2.bias", "module.encoder.encoders.6.norm1.weight", "module.encoder.encoders.6.norm1.bias", "module.encoder.encoders.6.norm2.weight", "module.encoder.encoders.6.norm2.bias", "module.encoder.encoders.7.self_attn.linear_q.weight", "module.encoder.encoders.7.self_attn.linear_q.bias", "module.encoder.encoders.7.self_attn.linear_k.weight", "module.encoder.encoders.7.self_attn.linear_k.bias", "module.encoder.encoders.7.self_attn.linear_v.weight", "module.encoder.encoders.7.self_attn.linear_v.bias", "module.encoder.encoders.7.self_attn.linear_out.weight", "module.encoder.encoders.7.self_attn.linear_out.bias", "module.encoder.encoders.7.feed_forward.w_1.weight", "module.encoder.encoders.7.feed_forward.w_1.bias", "module.encoder.encoders.7.feed_forward.w_2.weight", "module.encoder.encoders.7.feed_forward.w_2.bias", "module.encoder.encoders.7.norm1.weight", "module.encoder.encoders.7.norm1.bias", "module.encoder.encoders.7.norm2.weight", "module.encoder.encoders.7.norm2.bias", "module.encoder.encoders.8.self_attn.linear_q.weight", "module.encoder.encoders.8.self_attn.linear_q.bias", "module.encoder.encoders.8.self_attn.linear_k.weight", "module.encoder.encoders.8.self_attn.linear_k.bias", "module.encoder.encoders.8.self_attn.linear_v.weight", 
"module.encoder.encoders.8.self_attn.linear_v.bias", "module.encoder.encoders.8.self_attn.linear_out.weight", "module.encoder.encoders.8.self_attn.linear_out.bias", "module.encoder.encoders.8.feed_forward.w_1.weight", "module.encoder.encoders.8.feed_forward.w_1.bias", "module.encoder.encoders.8.feed_forward.w_2.weight", "module.encoder.encoders.8.feed_forward.w_2.bias", "module.encoder.encoders.8.norm1.weight", "module.encoder.encoders.8.norm1.bias", "module.encoder.encoders.8.norm2.weight", "module.encoder.encoders.8.norm2.bias", "module.encoder.encoders.9.self_attn.linear_q.weight", "module.encoder.encoders.9.self_attn.linear_q.bias", "module.encoder.encoders.9.self_attn.linear_k.weight", "module.encoder.encoders.9.self_attn.linear_k.bias", "module.encoder.encoders.9.self_attn.linear_v.weight", "module.encoder.encoders.9.self_attn.linear_v.bias", "module.encoder.encoders.9.self_attn.linear_out.weight", "module.encoder.encoders.9.self_attn.linear_out.bias", "module.encoder.encoders.9.feed_forward.w_1.weight", "module.encoder.encoders.9.feed_forward.w_1.bias", "module.encoder.encoders.9.feed_forward.w_2.weight", "module.encoder.encoders.9.feed_forward.w_2.bias", "module.encoder.encoders.9.norm1.weight", "module.encoder.encoders.9.norm1.bias", "module.encoder.encoders.9.norm2.weight", "module.encoder.encoders.9.norm2.bias", "module.encoder.encoders.10.self_attn.linear_q.weight", "module.encoder.encoders.10.self_attn.linear_q.bias", "module.encoder.encoders.10.self_attn.linear_k.weight", "module.encoder.encoders.10.self_attn.linear_k.bias", "module.encoder.encoders.10.self_attn.linear_v.weight", "module.encoder.encoders.10.self_attn.linear_v.bias", "module.encoder.encoders.10.self_attn.linear_out.weight", "module.encoder.encoders.10.self_attn.linear_out.bias", "module.encoder.encoders.10.feed_forward.w_1.weight", "module.encoder.encoders.10.feed_forward.w_1.bias", "module.encoder.encoders.10.feed_forward.w_2.weight", 
"module.encoder.encoders.10.feed_forward.w_2.bias", "module.encoder.encoders.10.norm1.weight", "module.encoder.encoders.10.norm1.bias", "module.encoder.encoders.10.norm2.weight", "module.encoder.encoders.10.norm2.bias", "module.encoder.encoders.11.self_attn.linear_q.weight", "module.encoder.encoders.11.self_attn.linear_q.bias", "module.encoder.encoders.11.self_attn.linear_k.weight", "module.encoder.encoders.11.self_attn.linear_k.bias", "module.encoder.encoders.11.self_attn.linear_v.weight", "module.encoder.encoders.11.self_attn.linear_v.bias", "module.encoder.encoders.11.self_attn.linear_out.weight", "module.encoder.encoders.11.self_attn.linear_out.bias", "module.encoder.encoders.11.feed_forward.w_1.weight", "module.encoder.encoders.11.feed_forward.w_1.bias", "module.encoder.encoders.11.feed_forward.w_2.weight", "module.encoder.encoders.11.feed_forward.w_2.bias", "module.encoder.encoders.11.norm1.weight", "module.encoder.encoders.11.norm1.bias", "module.encoder.encoders.11.norm2.weight", "module.encoder.encoders.11.norm2.bias", "module.encoder.after_norm.weight", "module.encoder.after_norm.bias", "module.decoder.embed.0.weight", "module.decoder.embed.1.pe", "module.decoder.decoders.0.self_attn.linear_q.weight", "module.decoder.decoders.0.self_attn.linear_q.bias", "module.decoder.decoders.0.self_attn.linear_k.weight", "module.decoder.decoders.0.self_attn.linear_k.bias", "module.decoder.decoders.0.self_attn.linear_v.weight", "module.decoder.decoders.0.self_attn.linear_v.bias", "module.decoder.decoders.0.self_attn.linear_out.weight", "module.decoder.decoders.0.self_attn.linear_out.bias", "module.decoder.decoders.0.src_attn.linear_q.weight", "module.decoder.decoders.0.src_attn.linear_q.bias", "module.decoder.decoders.0.src_attn.linear_k.weight", "module.decoder.decoders.0.src_attn.linear_k.bias", "module.decoder.decoders.0.src_attn.linear_v.weight", "module.decoder.decoders.0.src_attn.linear_v.bias", "module.decoder.decoders.0.src_attn.linear_out.weight", 
"module.decoder.decoders.0.src_attn.linear_out.bias", "module.decoder.decoders.0.feed_forward.w_1.weight", "module.decoder.decoders.0.feed_forward.w_1.bias", "module.decoder.decoders.0.feed_forward.w_2.weight", "module.decoder.decoders.0.feed_forward.w_2.bias", "module.decoder.decoders.0.norm1.weight", "module.decoder.decoders.0.norm1.bias", "module.decoder.decoders.0.norm2.weight", "module.decoder.decoders.0.norm2.bias", "module.decoder.decoders.0.norm3.weight", "module.decoder.decoders.0.norm3.bias", "module.decoder.decoders.1.self_attn.linear_q.weight", "module.decoder.decoders.1.self_attn.linear_q.bias", "module.decoder.decoders.1.self_attn.linear_k.weight", "module.decoder.decoders.1.self_attn.linear_k.bias", "module.decoder.decoders.1.self_attn.linear_v.weight", "module.decoder.decoders.1.self_attn.linear_v.bias", "module.decoder.decoders.1.self_attn.linear_out.weight", "module.decoder.decoders.1.self_attn.linear_out.bias", "module.decoder.decoders.1.src_attn.linear_q.weight", "module.decoder.decoders.1.src_attn.linear_q.bias", "module.decoder.decoders.1.src_attn.linear_k.weight", "module.decoder.decoders.1.src_attn.linear_k.bias", "module.decoder.decoders.1.src_attn.linear_v.weight", "module.decoder.decoders.1.src_attn.linear_v.bias", "module.decoder.decoders.1.src_attn.linear_out.weight", "module.decoder.decoders.1.src_attn.linear_out.bias", "module.decoder.decoders.1.feed_forward.w_1.weight", "module.decoder.decoders.1.feed_forward.w_1.bias", "module.decoder.decoders.1.feed_forward.w_2.weight", "module.decoder.decoders.1.feed_forward.w_2.bias", "module.decoder.decoders.1.norm1.weight", "module.decoder.decoders.1.norm1.bias", "module.decoder.decoders.1.norm2.weight", "module.decoder.decoders.1.norm2.bias", "module.decoder.decoders.1.norm3.weight", "module.decoder.decoders.1.norm3.bias", "module.decoder.decoders.2.self_attn.linear_q.weight", "module.decoder.decoders.2.self_attn.linear_q.bias", "module.decoder.decoders.2.self_attn.linear_k.weight", 
"module.decoder.decoders.2.self_attn.linear_k.bias", "module.decoder.decoders.2.self_attn.linear_v.weight", "module.decoder.decoders.2.self_attn.linear_v.bias", "module.decoder.decoders.2.self_attn.linear_out.weight", "module.decoder.decoders.2.self_attn.linear_out.bias", "module.decoder.decoders.2.src_attn.linear_q.weight", "module.decoder.decoders.2.src_attn.linear_q.bias", "module.decoder.decoders.2.src_attn.linear_k.weight", "module.decoder.decoders.2.src_attn.linear_k.bias", "module.decoder.decoders.2.src_attn.linear_v.weight", "module.decoder.decoders.2.src_attn.linear_v.bias", "module.decoder.decoders.2.src_attn.linear_out.weight", "module.decoder.decoders.2.src_attn.linear_out.bias", "module.decoder.decoders.2.feed_forward.w_1.weight", "module.decoder.decoders.2.feed_forward.w_1.bias", "module.decoder.decoders.2.feed_forward.w_2.weight", "module.decoder.decoders.2.feed_forward.w_2.bias", "module.decoder.decoders.2.norm1.weight", "module.decoder.decoders.2.norm1.bias", "module.decoder.decoders.2.norm2.weight", "module.decoder.decoders.2.norm2.bias", "module.decoder.decoders.2.norm3.weight", "module.decoder.decoders.2.norm3.bias", "module.decoder.decoders.3.self_attn.linear_q.weight", "module.decoder.decoders.3.self_attn.linear_q.bias", "module.decoder.decoders.3.self_attn.linear_k.weight", "module.decoder.decoders.3.self_attn.linear_k.bias", "module.decoder.decoders.3.self_attn.linear_v.weight", "module.decoder.decoders.3.self_attn.linear_v.bias", "module.decoder.decoders.3.self_attn.linear_out.weight", "module.decoder.decoders.3.self_attn.linear_out.bias", "module.decoder.decoders.3.src_attn.linear_q.weight", "module.decoder.decoders.3.src_attn.linear_q.bias", "module.decoder.decoders.3.src_attn.linear_k.weight", "module.decoder.decoders.3.src_attn.linear_k.bias", "module.decoder.decoders.3.src_attn.linear_v.weight", "module.decoder.decoders.3.src_attn.linear_v.bias", "module.decoder.decoders.3.src_attn.linear_out.weight", 
"module.decoder.decoders.3.src_attn.linear_out.bias", "module.decoder.decoders.3.feed_forward.w_1.weight", "module.decoder.decoders.3.feed_forward.w_1.bias", "module.decoder.decoders.3.feed_forward.w_2.weight", "module.decoder.decoders.3.feed_forward.w_2.bias", "module.decoder.decoders.3.norm1.weight", "module.decoder.decoders.3.norm1.bias", "module.decoder.decoders.3.norm2.weight", "module.decoder.decoders.3.norm2.bias", "module.decoder.decoders.3.norm3.weight", "module.decoder.decoders.3.norm3.bias", "module.decoder.decoders.4.self_attn.linear_q.weight", "module.decoder.decoders.4.self_attn.linear_q.bias", "module.decoder.decoders.4.self_attn.linear_k.weight", "module.decoder.decoders.4.self_attn.linear_k.bias", "module.decoder.decoders.4.self_attn.linear_v.weight", "module.decoder.decoders.4.self_attn.linear_v.bias", "module.decoder.decoders.4.self_attn.linear_out.weight", "module.decoder.decoders.4.self_attn.linear_out.bias", "module.decoder.decoders.4.src_attn.linear_q.weight", "module.decoder.decoders.4.src_attn.linear_q.bias", "module.decoder.decoders.4.src_attn.linear_k.weight", "module.decoder.decoders.4.src_attn.linear_k.bias", "module.decoder.decoders.4.src_attn.linear_v.weight", "module.decoder.decoders.4.src_attn.linear_v.bias", "module.decoder.decoders.4.src_attn.linear_out.weight", "module.decoder.decoders.4.src_attn.linear_out.bias", "module.decoder.decoders.4.feed_forward.w_1.weight", "module.decoder.decoders.4.feed_forward.w_1.bias", "module.decoder.decoders.4.feed_forward.w_2.weight", "module.decoder.decoders.4.feed_forward.w_2.bias", "module.decoder.decoders.4.norm1.weight", "module.decoder.decoders.4.norm1.bias", "module.decoder.decoders.4.norm2.weight", "module.decoder.decoders.4.norm2.bias", "module.decoder.decoders.4.norm3.weight", "module.decoder.decoders.4.norm3.bias", "module.decoder.decoders.5.self_attn.linear_q.weight", "module.decoder.decoders.5.self_attn.linear_q.bias", "module.decoder.decoders.5.self_attn.linear_k.weight", 
"module.decoder.decoders.5.self_attn.linear_k.bias", "module.decoder.decoders.5.self_attn.linear_v.weight", "module.decoder.decoders.5.self_attn.linear_v.bias", "module.decoder.decoders.5.self_attn.linear_out.weight", "module.decoder.decoders.5.self_attn.linear_out.bias", "module.decoder.decoders.5.src_attn.linear_q.weight", "module.decoder.decoders.5.src_attn.linear_q.bias", "module.decoder.decoders.5.src_attn.linear_k.weight", "module.decoder.decoders.5.src_attn.linear_k.bias", "module.decoder.decoders.5.src_attn.linear_v.weight", "module.decoder.decoders.5.src_attn.linear_v.bias", "module.decoder.decoders.5.src_attn.linear_out.weight", "module.decoder.decoders.5.src_attn.linear_out.bias", "module.decoder.decoders.5.feed_forward.w_1.weight", "module.decoder.decoders.5.feed_forward.w_1.bias", "module.decoder.decoders.5.feed_forward.w_2.weight", "module.decoder.decoders.5.feed_forward.w_2.bias", "module.decoder.decoders.5.norm1.weight", "module.decoder.decoders.5.norm1.bias", "module.decoder.decoders.5.norm2.weight", "module.decoder.decoders.5.norm2.bias", "module.decoder.decoders.5.norm3.weight", "module.decoder.decoders.5.norm3.bias", "module.decoder.after_norm.weight", "module.decoder.after_norm.bias", "module.decoder.output_layer.weight", "module.decoder.output_layer.bias", "module.ctc.ctc_lo.weight", "module.ctc.ctc_lo.bias".

mayank-git-hub commented 4 years ago

I have made the corresponding changes in test.py. I haven't tested them on my machine, but they should work.

The problem is that the DataParallel module prefixes every layer name with "module.", so when you load the checkpoint into a model that isn't wrapped in DataParallel, PyTorch cannot map the weights. A simple fix is to strip the "module." prefix from the keys of the state dictionary before calling load_state_dict.
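A minimal sketch of that key-renaming fix (the checkpoint filename and the "model" key are illustrative and may differ from what test.py actually uses):

```python
def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix that nn.DataParallel adds to parameter names."""
    return {
        (key[len("module."):] if key.startswith("module.") else key): value
        for key, value in state_dict.items()
    }

# Illustrative usage with a checkpoint saved from a DataParallel-wrapped model:
# import torch
# checkpoint = torch.load("checkpoint.pth", map_location="cpu")
# model.load_state_dict(strip_module_prefix(checkpoint["model"]))
```

An alternative is to wrap the model in `torch.nn.DataParallel` before loading, so the key names match again, but renaming the keys keeps inference code free of the wrapper.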

TheisTrue commented 4 years ago

The code runs normally now. Thank you very much.

mayank-git-hub commented 4 years ago

Glad I could help!