Open ymitiku opened 6 years ago
Hey @mitiku1 thanks for reaching out!
It's awesome that you made the checkpoint inspection, as it is very helpful. As one can see, the `residual_conv1dglu/` component is being added to the scopes during training but not during synthesis, which is what is causing this whole issue. I have had this issue a couple of times during development and I never really figured out how it went away. So please, consider these two scenarios:
As for now, for a quick temporary solution (that does not require training), you can rename your checkpoint scopes during synthesis by taking off the residual_conv1dglu/ component to ensure the synthesis graph is consistent with the training one.
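The rename itself can be sketched as a pure name mapping (a minimal sketch: the checkpoint loading/saving boilerplate is omitted, and the example variable names below are hypothetical):

```python
def strip_scope_component(var_name, component="residual_conv1dglu/"):
    """Drop one scope component from a checkpoint variable name so the
    training-time name matches what the synthesis graph expects."""
    return var_name.replace(component, "")

# Hypothetical usage: map each synthesis-graph name back to the name
# stored in the training checkpoint (e.g. to build a tf.train.Saver var_list).
checkpoint_names = [
    "WaveNet_model/inference/residual_conv1dglu/causal_conv1d/kernel",
    "WaveNet_model/inference/residual_conv1dglu/causal_conv1d/bias",
]
rename_map = {strip_scope_component(name): name for name in checkpoint_names}
```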
Thank you for reporting this bug; your answer to my question above will help us solve it for the long run.
@Rayhane-mamah Thanks for your reply. To answer your question, I will explain why I made each change.
Change-1:
I've made some changes to `hparams.py` (to deal with a low-memory GPU). If you want to see the changes I can post them here.
Change-2:
I've changed the `--tacotron_train_steps` and `--wavenet_train_steps` values in `train.py` to 200, to reproduce the issue quickly.
Change-3:
I was getting an `AttributeError: 'tuple' object has no attribute '_uses_learning_phase'` error when calling functions that return a tuple, so I changed the tuple return values of some functions (`ResidualConv1DGLU.call`, `CausalConv1D.call`) to lists.
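The pattern of Change-3 can be sketched like this (a simplified stand-in class, not the repo's actual implementation):

```python
class CausalConv1DSketch:
    """Toy stand-in for a layer whose call() used to return a tuple."""

    def call(self, inputs, queue):
        output = inputs  # placeholder for the real convolution output
        # Before: `return output, queue` -- the TF 1.x Keras wrapper then
        # probed the tuple object itself for `_uses_learning_phase` and crashed.
        # After: return a list so the wrapper iterates over the outputs.
        return [output, queue]
```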
Change-4:
After completing this I was getting another error message:

```
TypeError: This Layer takes an `inputs` argument to call(), and only the `inputs` argument may be specified as a positional argument. Pass everything else as a keyword argument (those arguments will not be tracked as inputs to the Layer).
```

To fix this I've changed some lines to call the methods with keyword arguments (`ResidualConv1DGLU.incremental_step`, `WaveNet.step`).
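The call-site pattern of Change-4 looks roughly like this (the function name and argument names are hypothetical stand-ins for the real signatures):

```python
def incremental_step(inputs, *, c=None, queue=None):
    """Toy stand-in: the keyword-only parameters mimic the Keras rule
    that only `inputs` may be passed positionally to a layer call."""
    return inputs, (c, queue)

# Before (rejected): incremental_step(frame, cond, prev_queue)
# After (accepted): everything besides `inputs` goes by keyword.
out, state = incremental_step("frame", c="cond", queue="prev_queue")
```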
Change-5:
I've replaced the `info` argument name in the `plot.plot_alignment` method calls with the `title` argument (at three call sites in `tacotron.Synthesizer.synthesize`). This is because the method definition does not have an `info` parameter.
Change-6:
In the `wavenet_vocoder.modules.CausalConv1D.call` method, when `convolution_queue` is None I was getting the error message `AttributeError: 'NoneType' object has no attribute '_keras_mask'`. To fix this issue I've changed the following code:

```python
return tf.reshape(output, [batch_size, 1, self.layer.filters]), convolution_queue
```

to

```python
if convolution_queue is None:
    return tf.reshape(output, [batch_size, 1, self.layer.filters])
else:
    return [tf.reshape(output, [batch_size, 1, self.layer.filters]), convolution_queue]
```
To reproduce, these are the commands I ran:

```
python preprocess.py
python train.py --model "Tacotron"
python synthesize.py --model Tacotron --GTA True --mode synthesis
python train.py --model "WaveNet"
python synthesize.py --model Tacotron --GTA False --mode eval --text_list test.txt
python synthesize.py --model WaveNet --mode eval
```

After the last command, the program throws the error message.
Hello again @mitiku1, first of all thanks for all the details.
Since I moved on from the previous commit a while back, I was not able to reproduce this locally; I must have fixed it sometime between the previous version and the current one. In any case, please try updating to the latest version and let me know if the problem persists!
EDIT: The Keras wrapper bugs still exist for tf 1.8 or later users. I'm working on an efficient fix, but your workarounds can work for now.
EDIT 2: I have pushed another commit with your fix, let me know if I missed anything! Thanks for the contribution :)
@Rayhane-mamah thanks again. Though I tried with the latest version of the repo, the error persisted until I completely changed the virtual environment. I installed tensorflow 1.10.0 and the other libraries from this repository's requirements file. Now the error message is gone and I'm able to run synthesize.py for the WaveNet vocoder. I'm not sure whether changing the tensorflow version or the versions of the other libraries solved the problem.
One last thing: though this is not the case for tensorflow 1.10, I think you forgot here and passed values for the non-input arguments as positional arguments.
Can anyone tell me how to use synthesize.py file on the pretrained model?
This is not the same as #163, because that issue is produced by running both models with `python train.py --model="Both"`. I've also checked the variables stored inside the checkpoint file using tensorflow's `print_tensors_in_checkpoint_file` function. As can be seen from its output in the log above, there is no value for the `WaveNet_model/WaveNet_model/inference/causal_conv1d/residual_block_conv_ResidualConv1DGLU_0/bias/ExponentialMovingAverage` variable.