ghost opened this issue 3 years ago
@paklau99988 Wait, it is still a work in progress, although you can run the checkpoint on this commit: https://github.com/rishikksh20/HiFi-GAN/tree/d044dbcdf799f0fdfbfc1920e57e95ac6a05f91b
OK, thank you. Was it trained on LJSpeech or VCTK?
LJSpeech
Thank you, inference works now. How do I load the pretrained model for fine-tuning?
I tried loading checkpoint ... for fine-tuning:
2021-02-25 06:35:11,174 - INFO - Resuming from checkpoint: first_7c049f9_0280.pt
Traceback (most recent call last):
File "trainer.py", line 52, in <module>
train(args, pt_dir, args.checkpoint_path, trainloader, valloader, writer, logger, hp, hp_str)
File "/root/sharedfolder/dev/deep/HiFi-GAN/utils/train.py", line 55, in train
optim_d.load_state_dict(checkpoint['optim_d'])
File "/opt/conda/lib/python3.7/site-packages/torch/optim/optimizer.py", line 124, in load_state_dict
raise ValueError("loaded state dict contains a parameter group "
ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
I got this error with your trained model.
So I wrote this:
model.load_state_dict(checkpoint['model_g'], strict=False)
But then I ran into a new problem:
Can you help?
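A minimal sketch of one common workaround: restore only the generator weights from the checkpoint and create fresh optimizers instead of calling `optim_d.load_state_dict(...)`, since the saved optimizer's parameter groups no longer match. The checkpoint keys (`model_g`, `optim_g`, `optim_d`) follow the names in the traceback above; the tiny `Linear` models here are stand-ins for the real generator and discriminator, so treat this as an assumption about the repo's checkpoint layout, not its exact API.

```python
import torch

# Stand-in models; in the real repo these would be the HiFi-GAN
# generator and discriminator.
model_g = torch.nn.Linear(4, 4)
model_d = torch.nn.Linear(4, 1)

# Simulate a checkpoint saved with the key names seen in the traceback.
checkpoint = {
    'model_g': torch.nn.Linear(4, 4).state_dict(),
    'optim_g': torch.optim.Adam(model_g.parameters()).state_dict(),
    'optim_d': torch.optim.Adam(model_d.parameters()).state_dict(),
}

# Restore generator weights only; strict=False tolerates missing or
# extra keys when the architectures differ slightly.
model_g.load_state_dict(checkpoint['model_g'], strict=False)

# Do NOT restore the saved optimizer state when its parameter groups
# don't match the current models; build fresh optimizers for fine-tuning.
optim_g = torch.optim.Adam(model_g.parameters(), lr=2e-4)
optim_d = torch.optim.Adam(model_d.parameters(), lr=2e-4)
```

The trade-off is that fine-tuning restarts with fresh optimizer moments and the initial learning rate, which is usually acceptable (and often desirable) when the parameter groups have changed.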