As @syang1993 mentions in the README, this project is based on Keithito's implementation of Tacotron. I compared the training scripts of the two projects and they seem pretty similar, so I was wondering whether there is any way to use Keithito's pre-trained model here to synthesize speech. Right now I get the following error when attempting to do so:
-> "tensorflow.python.framework.errors_impl.NotFoundError: Key model/inference/Multihead-attention/attention_b not found in checkpoint"
-> "[[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]"
Could someone shed some light on how to do this, if it is possible in the first place?
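For reference, a minimal sketch of how one might diagnose this kind of `NotFoundError`: the restore fails because the graph here expects variables (e.g. the GST multi-head attention weights) that Keithito's checkpoint does not contain. In practice each name list can be obtained with `tf.train.list_variables(checkpoint_path)`; the lists below are hypothetical stand-ins for illustration.

```python
# Sketch: diff the variable names the graph requires against those the
# checkpoint actually provides, to see why the restore fails.
# In practice, obtain each list with:
#   names = [name for name, shape in tf.train.list_variables(checkpoint_path)]

def missing_keys(required, available):
    """Return variable names the graph requires but the checkpoint lacks."""
    return sorted(set(required) - set(available))

# Hypothetical: variables this repo's graph expects (includes GST attention).
graph_vars = [
    "model/inference/encoder_cbhg/conv_bank_1/kernel",
    "model/inference/Multihead-attention/attention_b",
]
# Hypothetical: variables present in Keithito's checkpoint (no GST layers).
ckpt_vars = [
    "model/inference/encoder_cbhg/conv_bank_1/kernel",
]

print(missing_keys(graph_vars, ckpt_vars))
```

A non-empty result means the checkpoint cannot be restored into this graph as-is; the extra layers would have to be excluded from the saver or trained from scratch.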
Thanks.