I trained the model following the LJSpeech tutorial provided in this repo, and I encountered the problem shown below during training: the left side is the result from the pre-trained model, and the right side is the result of my own training. Have you encountered this before?
Before this, I also ran into a problem like https://github.com/ming024/FastSpeech2/issues/105, so I commented out `model = nn.DataParallel(model)` in train.py and changed

```python
torch.save(
    {
        "model": model.module.state_dict(),
        "optimizer": optimizer._optimizer.state_dict(),
    }
)
```

to

```python
torch.save(
    {
        "model": model.state_dict(),
        "optimizer": optimizer._optimizer.state_dict(),
    }
)
```
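An alternative to editing the save code is to fix the mismatch at load time: `nn.DataParallel` wraps the model and prefixes every parameter name with `module.`, so a checkpoint saved from a wrapped model will not load into an unwrapped one (and vice versa). A minimal sketch of a key-remapping helper (the function name `strip_module_prefix` is my own, not part of this repo):

```python
def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix that nn.DataParallel adds to parameter
    names, so a checkpoint saved from a DataParallel-wrapped model can be
    loaded into a plain (unwrapped) model."""
    prefix = "module."
    return {
        (key[len(prefix):] if key.startswith(prefix) else key): value
        for key, value in state_dict.items()
    }
```

Usage would look like `model.load_state_dict(strip_module_prefix(ckpt["model"]))`, which avoids touching the training script at all.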