rishikksh20 / HiFi-GAN

HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis
MIT License
81 stars · 20 forks

Load pre-trained model fails #4

Open ghost opened 3 years ago

ghost commented 3 years ago

Got this with your trained model:

Traceback (most recent call last):
  File "inference.py", line 60, in <module>
    main(args)
  File "inference.py", line 23, in main
    model.load_state_dict(checkpoint['model_g'])
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1052, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Generator:
        Missing key(s) in state_dict: "generator.1.weight_g", "generator.1.weight_v", "generator.4.weight_g",

So I changed the call to: model.load_state_dict(checkpoint['model_g'], strict=False)
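Note that strict=False only suppresses the RuntimeError; any parameter missing from the checkpoint keeps its random initial value, so the failure usually just moves downstream. A minimal sketch with a toy module (layer names are hypothetical, not the real Generator):

```python
import torch
import torch.nn as nn

# Toy module standing in for the HiFi-GAN Generator (names hypothetical).
model = nn.Sequential(nn.Linear(4, 4))

# A checkpoint that lacks one of the model's parameters, mimicking the
# "Missing key(s) in state_dict" error from the traceback above.
ckpt = {"0.weight": torch.zeros(4, 4)}  # "0.bias" is missing

# strict=False skips mismatched keys but reports them in the return value;
# the skipped parameters stay randomly initialized.
result = model.load_state_dict(ckpt, strict=False)
print(result.missing_keys)  # ['0.bias']
```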

But then a new problem appeared:

python inference.py
Traceback (most recent call last):
  File "inference.py", line 60, in <module>
    main(args)
  File "inference.py", line 31, in main
    audio = model.inference(mel)
  File "/root/sharedfolder/dev/HiFi-GAN/model/generator.py", line 129, in inference
    zero = torch.full((1, self.mel_channel, 10), -11.5129).to(mel.device)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 779, in __getattr__
    type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'Generator' object has no attribute 'mel_channel'

Can you help?
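The second error is different in kind: mel_channel is set in the Generator's __init__, so no checkpoint can supply it; it indicates the checkpoint was saved by a different version of the code than the one being run. One quick way to confirm such a version mismatch is to diff the key sets (toy modules below, hypothetical names):

```python
import torch.nn as nn

# Two toy modules standing in for mismatched Generator code versions (hypothetical).
old = nn.Sequential(nn.Linear(4, 4))                   # version that saved the checkpoint
new = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 4))  # version inference.py builds

ckpt = old.state_dict()
missing = sorted(set(new.state_dict()) - set(ckpt))
print(missing)  # parameters the checkpoint cannot supply
```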

rishikksh20 commented 3 years ago

@paklau99988 Please wait, this repository is still a work in progress. However, you can run the checkpoint against this commit: https://github.com/rishikksh20/HiFi-GAN/tree/d044dbcdf799f0fdfbfc1920e57e95ac6a05f91b

ghost commented 3 years ago

OK, thank you. Was the model trained on LJSpeech or VCTK?

rishikksh20 commented 3 years ago

LJSpeech

ghost commented 3 years ago

Thank you, inference works now. How do I load the pre-trained model for fine-tuning?

ghost commented 3 years ago

I tried loading the checkpoint ... for fine-tuning:

2021-02-25 06:35:11,174 - INFO - Resuming from checkpoint: first_7c049f9_0280.pt
Traceback (most recent call last):
  File "trainer.py", line 52, in <module>
    train(args, pt_dir, args.checkpoint_path, trainloader, valloader, writer, logger, hp, hp_str)
  File "/root/sharedfolder/dev/deep/HiFi-GAN/utils/train.py", line 55, in train
    optim_d.load_state_dict(checkpoint['optim_d'])
  File "/opt/conda/lib/python3.7/site-packages/torch/optim/optimizer.py", line 124, in load_state_dict
    raise ValueError("loaded state dict contains a parameter group "
ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
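This ValueError typically means the discriminator built by the current code has a different number of parameters than the one the checkpoint's optimizer state was saved with. For fine-tuning across such a mismatch, one common workaround is to load only the model weights and start from a fresh optimizer state. A minimal sketch with toy modules (names are hypothetical, not the actual trainer code):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the generator and a checkpoint from a mismatched code version.
model = nn.Linear(4, 4)
old_disc = nn.Linear(2, 2, bias=False)  # one fewer parameter than the current model
ckpt = {
    "model_g": nn.Linear(4, 4).state_dict(),
    "optim_d": torch.optim.Adam(old_disc.parameters()).state_dict(),
}

# The model weights transfer fine.
model.load_state_dict(ckpt["model_g"])

# The optimizer state does not: its parameter group has a different size,
# so fall back to a freshly initialized optimizer instead of resuming.
optim_d = torch.optim.Adam(model.parameters())
resumed = True
try:
    optim_d.load_state_dict(ckpt["optim_d"])
except ValueError:
    resumed = False  # fine-tune with a fresh optimizer state
print(resumed)
```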