OpenNMT / OpenNMT-py

Open Source Neural Machine Translation and (Large) Language Models in PyTorch
https://opennmt.net/
MIT License

Cannot load recurrent encoder-decoder model trained with copy attention #2503

Closed: colincwilson closed this issue 10 months ago

colincwilson commented 11 months ago

Thanks very much for continuing to develop and support OpenNMT!

I have trained an encoder-decoder LSTM model with copy attention (share_vocab: true, copy_attn: true) using a fresh and recent installation (OpenNMT-py 3.4.3, PyTorch 2.1.0, Python 3.11.6, CPU only). Training and validation seem to go fine. However, when I try to load the model from a checkpoint with onmt_translate, this error occurs:

.../opennmt/lib/python3.11/site-packages/onmt/models/model.py", line 130, in load_state_dict
    raise ValueError(
ValueError: Missing key in checkpoint: generator.linear.weight

After experimenting a bit with the model.load_state_dict() function and the saved checkpoint, I believe the error arises because the function expects a module 'generator.linear' with parameters 'weight' and 'bias', but the checkpoint instead has a module 'generator' with parameters 'linear.weight' and 'linear.bias'. Model loading succeeds when the training config file is identical except that copy_attn is set to false.
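
To illustrate the mismatch, here is a minimal sketch (the checkpoint path is from my run; the comparison itself is only for illustration):

import torch

# Load the raw checkpoint dict saved during training.
ckpt = torch.load("model_step_100.pt", map_location="cpu")

# load_state_dict looks up the fully qualified parameter name ...
expected = "generator.linear.weight"
# ... but the checkpoint's 'generator' entry stores names relative to the module.
stored = set(ckpt["generator"].keys())

print(expected in stored)                             # False -> ValueError
print(expected.removeprefix("generator.") in stored)  # True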

vince62s commented 11 months ago

Yeah, very few people use copy_attn since the transformer. What is your use case? I was planning to drop this feature. Anyway, I'll look at this next week.

colincwilson commented 11 months ago

Thanks! I suspected that this functionality might be phased out. The use case is morphological reinflection (for example, https://sigmorphon.github.io/sharedtasks/), for which RNN models are still fairly competitive and copy attention is conceptually motivated and empirically useful. I believe I was able to load the model after making a small addition to model.load_state_dict, along the lines sketched below. Would you want a PR for that? Or maybe you have another quick fix in mind on the checkpoint-save side, or want to go ahead with the plan of removing copy attention for RNNs altogether.
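
Roughly, the addition looks like this (an illustrative sketch, not the actual code in onmt/models/model.py; the helper name is made up):

def find_generator_param(checkpoint, full_key):
    """Fall back to the checkpoint's flat 'generator' entry when a fully
    qualified key such as 'generator.linear.weight' is missing.
    Illustrative helper only, not the real OpenNMT-py code."""
    prefix = "generator."
    if full_key.startswith(prefix):
        flat_key = full_key[len(prefix):]  # e.g. 'linear.weight'
        generator_state = checkpoint.get("generator", {})
        if flat_key in generator_state:
            return generator_state[flat_key]
    raise KeyError(f"Missing key in checkpoint: {full_key}")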

vince62s commented 11 months ago

Do you see those two in the checkpoint? https://github.com/OpenNMT/OpenNMT-py/blob/master/onmt/modules/copy_generator.py#L69-L70

colincwilson commented 11 months ago

Yes, thanks, they are both there in the checkpoint.

import torch

model = torch.load('model_step_100.pt')  # raw checkpoint dict
model['generator'].keys()
> odict_keys(['linear.weight', 'linear.bias', 'linear_copy.weight', 'linear_copy.bias'])

vince62s commented 11 months ago

Can you git pull and tell me if it works for you?

colincwilson commented 11 months ago

Thanks! Will do later this evening.

Update: Still getting the same error, unfortunately. I believe that in your edit, near lines 143-144 of model.py, keyname should be set to name.removeprefix("generator.") + "." + param_name ...
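
That is, with made-up values standing in for the loop variables (an illustrative reconstruction, since I'm not showing the surrounding code here):

# Hypothetical values mirroring the loop in model.py:
name = "generator.linear"  # module name as it appears in the model
param_name = "weight"      # parameter name within that module

# Strip the 'generator.' prefix so the key matches the flat layout
# stored under checkpoint['generator'].
keyname = name.removeprefix("generator.") + "." + param_name
print(keyname)  # -> 'linear.weight'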

vince62s commented 10 months ago

Closing. Reopen if needed.