Closed joanise closed 1 month ago
relates to #114
Sure enough, it's the same problem with a different kind of file (checkpoint vs config).
We should have some kind of version number or magic number identifying each type of file we generate/support, and a quick check of it before the Pydantic validation even starts.
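A sketch of what that quick check could look like: stamp every file we write with its type and a schema version, and verify the stamp before Pydantic ever sees the contents. The `model_info` key, `CHECKPOINT_VERSION` constant, and function names here are assumptions for illustration, not EveryVoice's actual API.

```python
# Sketch of the proposed magic-number / version check, run before any
# Pydantic validation. Names are hypothetical.
CHECKPOINT_VERSION = "1.0"

def stamp(checkpoint: dict, model_type: str) -> dict:
    """Record what kind of file this is when we write it."""
    checkpoint["model_info"] = {"name": model_type, "version": CHECKPOINT_VERSION}
    return checkpoint

def quick_check(checkpoint: dict, expected_type: str) -> None:
    """Fail fast with a clear message if the file is not what we expect."""
    info = checkpoint.get("model_info")
    if info is None:
        raise ValueError("missing 'model_info': not a stamped EveryVoice file")
    if info["name"] != expected_type:
        raise ValueError(f"expected a {expected_type} file, got {info['name']}")
    if info["version"] != CHECKPOINT_VERSION:
        raise ValueError(f"unsupported file version {info['version']}")
```

A mismatched file then fails with one short, specific `ValueError` instead of a long list of Pydantic field errors.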
good idea - bumping this up so that we solve it pre-release of the checkpoints
First attempt: save the model type and a version number when FastSpeech2.on_save_checkpoint() is called, then in FastSpeech2.on_load_checkpoint(checkpoint) make sure the version and the model type match what we are expecting.
There's a problem with that approach: pytorch lightning actually instantiates a FastSpeech2 object, which tries to load the wrong config type during __init__(), and then pydantic raises an exception, obviously because a lot of fields are wrong.
Looking at pytorch lightning's code, we can see that it creates the class instance first and only later calls on_load_checkpoint(), in _load_state():
https://github.com/Lightning-AI/pytorch-lightning/blob/master/src/lightning/pytorch/core/saving.py#L117
https://github.com/Lightning-AI/pytorch-lightning/blob/master/src/lightning/pytorch/core/saving.py#L165
```python
obj = instantiator(cls, _cls_kwargs) if instantiator else cls(**_cls_kwargs)
obj.on_load_checkpoint(checkpoint)
```
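The ordering problem can be reproduced without Lightning at all. This toy stand-in for _load_state() shows that __init__() (where the config gets validated) always runs before on_load_checkpoint(), so a check placed in the hook fires too late; the class and function bodies here are illustrative, not Lightning's real code.

```python
# Toy reproduction of the call order in lightning's _load_state(): the class
# is instantiated first, so any config validation in __init__() happens
# before an on_load_checkpoint() check gets a chance to reject the file.
calls = []

class ToyModel:
    def __init__(self, **kwargs):
        calls.append("__init__")            # pydantic would validate here

    def on_load_checkpoint(self, checkpoint):
        calls.append("on_load_checkpoint")  # our type/version check would run here

def _load_state(cls, checkpoint, **kwargs):
    obj = cls(**kwargs)                     # step 1: instantiate
    obj.on_load_checkpoint(checkpoint)      # step 2: only now call the hook
    return obj

_load_state(ToyModel, {})
print(calls)  # ['__init__', 'on_load_checkpoint']
```

So the check has to happen before load_from_checkpoint() is called at all, not inside the module's hooks.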
```
Traceback (most recent call last):
  File "/fs/hestia_Hnrc/ict/sam037/git/EveryVoice/everyvoice/tests/test_model.py", line 212, in test_wrong_model_type
    FastSpeech2.load_from_checkpoint(ckpt_fn)
  File "/home/sam037/.conda/envs/EveryVoice.sl/lib/python3.10/site-packages/pytorch_lightning/utilities/model_helpers.py", line 125, in wrapper
    return self.method(cls, *args, **kwargs)
  File "/home/sam037/.conda/envs/EveryVoice.sl/lib/python3.10/site-packages/pytorch_lightning/core/module.py", line 1582, in load_from_checkpoint
    loaded = _load_from_checkpoint(
  File "/home/sam037/.conda/envs/EveryVoice.sl/lib/python3.10/site-packages/pytorch_lightning/core/saving.py", line 91, in _load_from_checkpoint
    model = _load_state(cls, checkpoint, strict=strict, **kwargs)
  File "/home/sam037/.conda/envs/EveryVoice.sl/lib/python3.10/site-packages/pytorch_lightning/core/saving.py", line 165, in _load_state
    obj = instantiator(cls, _cls_kwargs) if instantiator else cls(**_cls_kwargs)
  File "/fs/hestia_Hnrc/ict/sam037/git/EveryVoice/everyvoice/model/feature_prediction/FastSpeech2_lightning/fs2/model.py", line 48, in __init__
    config = FastSpeech2Config(**config)
  File "/fs/hestia_Hnrc/ict/sam037/git/EveryVoice/everyvoice/config/shared_types.py", line 128, in __init__
    __pydantic_self__.__pydantic_validator__.validate_python(
```
Rookie mistake: I gave my vocoder_path to the model_path argument in synthesize. It took me a while to figure that out from this log:
Another example command:
A little bit of friendlier messaging could help the user a lot.
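For example, a hedged sketch of a friendlier pre-flight check that synthesize could run on each path before handing the file to Lightning. The `model_info` key follows the versioning idea proposed earlier in this thread, and the function name is hypothetical:

```python
# Hypothetical pre-flight check for synthesize: peek at the checkpoint's
# stamp and name the mismatch in plain language, instead of letting the
# user dig through a pydantic validation traceback.
def check_model_path(checkpoint: dict, expected: str, path: str) -> None:
    actual = checkpoint.get("model_info", {}).get("name", "an unknown model")
    if actual != expected:
        raise ValueError(
            f"{path} contains a checkpoint for {actual}, but this argument "
            f"expects a {expected} checkpoint. Did you swap your model and "
            f"vocoder paths?"
        )
```

With something like this, passing a vocoder checkpoint as model_path would fail immediately with a message that names both file types.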