Kyubyong / tacotron

A TensorFlow Implementation of Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model
Apache License 2.0

eval.py key not found error #78

Open sniperwrb opened 7 years ago

sniperwrb commented 7 years ago

Whenever I use norm_type="ins" and run speech reconstruction with a model trained on the Bible dataset (also trained with norm_type="ins"), I get the error below. eval.py runs correctly with the pretrained model, but as soon as I train even one more epoch on top of it, the same error appears. With norm_type="None" it does reconstruct a wav file, but the output is repetitive nonsense noise, for both the pretrained model and the model I trained.

2017-07-26 13:03:55.649744: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Key net/decoder2/conv1d_banks/normalize/Variable_1 not found in checkpoint
2017-07-26 13:03:55.682754: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Key net/decoder2/conv1d_banks/normalize/Variable not found in checkpoint
2017-07-26 13:03:55.704804: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Key net/decoder2/norm2/Variable not found in checkpoint
2017-07-26 13:03:55.719144: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Key net/decoder2/norm1/Variable_1 not found in checkpoint
2017-07-26 13:03:55.721827: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Key net/encoder/conv1d_banks/normalize/Variable_1 not found in checkpoint
2017-07-26 13:03:55.728981: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Key net/encoder/conv1d_banks/normalize/Variable not found in checkpoint
2017-07-26 13:03:55.731421: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Key net/decoder2/norm1/Variable not found in checkpoint
2017-07-26 13:03:55.734391: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Key net/decoder2/norm2/Variable_1 not found in checkpoint
2017-07-26 13:03:55.753259: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Key net/encoder/norm1/Variable_1 not found in checkpoint
2017-07-26 13:03:55.756843: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Key net/encoder/norm1/Variable not found in checkpoint
2017-07-26 13:03:55.757504: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Key net/encoder/norm2/Variable not found in checkpoint
2017-07-26 13:03:55.760652: W tensorflow/core/framework/op_kernel.cc:1158] Not found: Key net/encoder/norm2/Variable_1 not found in checkpoint
Traceback (most recent call last):
  File "eval.py", line 82, in <module>
    eval()
  File "eval.py", line 42, in eval
    sv.saver.restore(sess, tf.train.latest_checkpoint(hp.logdir))
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1548, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 789, in run
    run_metadata_ptr)
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 997, in _run
    feed_dict_string, options, run_metadata)
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1132, in _do_run
    target_list, options, run_metadata)
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1152, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key net/decoder2/conv1d_banks/normalize/Variable_1 not found in checkpoint
  [[Node: save/RestoreV2_28 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_28/tensor_names, save/RestoreV2_28/shape_and_slices)]]

Caused by op u'save/RestoreV2_28', defined at:
  File "eval.py", line 82, in <module>
    eval()
  File "eval.py", line 39, in eval
    sv = tf.train.Supervisor()
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/supervisor.py", line 300, in __init__
    self._init_saver(saver=saver)
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/supervisor.py", line 448, in _init_saver
    saver = saver_mod.Saver()
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1139, in __init__
    self.build()
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1170, in build
    restore_sequentially=self._restore_sequentially)
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 691, in build
    restore_sequentially, reshape)
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 407, in _AddRestoreOps
    tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 247, in restore_op
    [spec.tensor.dtype])[0])
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 640, in restore_v2
    dtypes=dtypes, name=name)
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2506, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/ruobai/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1269, in __init__
    self._traceback = _extract_stack()

NotFoundError (see above for traceback): Key net/decoder2/conv1d_banks/normalize/Variable_1 not found in checkpoint
  [[Node: save/RestoreV2_28 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_save/Const_0_0, save/RestoreV2_28/tensor_names, save/RestoreV2_28/shape_and_slices)]]
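For reference, a quick way to see why the restore fails is to list the keys the checkpoint actually contains and compare them against the missing names above. This is a minimal TF 1.x sketch, assuming the checkpoints live under hp.logdir (the "logdir" path below is a placeholder):

import tensorflow as tf

# Placeholder path: point this at the directory holding your checkpoints.
ckpt_path = tf.train.latest_checkpoint("logdir")
reader = tf.train.NewCheckpointReader(ckpt_path)
var_shapes = reader.get_variable_to_shape_map()

# Print every variable stored in the checkpoint, e.g. to check whether the
# normalize/Variable tensors created by norm_type="ins" are present at all.
for name in sorted(var_shapes):
    print(name, var_shapes[name])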

AzamRabiee commented 7 years ago

I have the same issue...

keicoon commented 7 years ago

How about checking this issue (#61)?

sniperwrb commented 7 years ago

@keicoon It's not that problem... I think something is wrong with train_multi_gpus.py, because training with train.py works fine.
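As a general workaround (not the repo's official fix), you can restore only the variables whose names actually exist in the checkpoint and leave the rest at their initial values. A sketch under that assumption, again with "logdir" as a placeholder path and assuming the eval graph has already been built:

import tensorflow as tf

ckpt_path = tf.train.latest_checkpoint("logdir")
reader = tf.train.NewCheckpointReader(ckpt_path)
ckpt_vars = reader.get_variable_to_shape_map()

# Keep only graph variables whose checkpoint key (name without ":0") is stored.
restorable = [v for v in tf.global_variables()
              if v.name.split(":")[0] in ckpt_vars]

saver = tf.train.Saver(var_list=restorable)
with tf.Session() as sess:
    # Initialize everything first so the non-restored variables are defined.
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, ckpt_path)

Note that if the missing keys are the instance-norm parameters themselves, leaving them at their initializers will likely hurt output quality; retraining with a consistent norm_type, or checking whether train.py and train_multi_gpus.py create variables under the same scopes, is the cleaner fix.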

jackchinor commented 7 years ago

Same issue. Has anyone solved it? I'm going crazy over this...

jazzsonpark commented 7 years ago

I'm facing the identical issue... Has anybody figured this out?