Kyubyong / tacotron

A TensorFlow Implementation of Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model
Apache License 2.0

Errors when running eval #74

Open ErfolgreichCharismatisch opened 7 years ago

ErfolgreichCharismatisch commented 7 years ago

I am using Python 3.5. When I run python eval.py, I get:

Graph loaded
name: GeForce GTX 960
major: 5 minor: 2 memoryClockRate (GHz) 1.1775
pciBusID 0000:01:00.0
Total memory: 2.00GiB
Free memory: 1.64GiB
2017-07-18 23:56:39.782604: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:961] DMA: 0
2017-07-18 23:56:39.783177: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0:   Y
2017-07-18 23:56:39.783513: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 960, pci bus id: 0000:01:00.0)
WARNING:tensorflow:Standard services need a 'logdir' passed to the SessionManager
Restored!
Traceback (most recent call last):
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1139, in _do_call
    return fn(*args)
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1121, in _run_fn
    status, run_metadata)
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\contextlib.py", line 66, in __exit__
    next(self.gen)
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [When calling zero_state of AttentionWrapper attention_wrapper: Non-matching batch sizes between the memory (encoder output) and the requested batch size.  Are you using the BeamSearchDecoder?  If so, make sure your encoder output has been tiled to beam_width via tf.contrib.seq2seq.tile_batch, and the batch_size= argument passed to zero_state is batch_size * beam_width.] [Condition x == y did not hold element-wise:] [x (net/decoder1/attention_decoder/rnn/strided_slice:0) = ] [32] [y (net/decoder1/attention_decoder/BahdanauAttention/strided_slice_1:0) = ] [1]
         [[Node: net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert = Assert[T=[DT_STRING, DT_STRING, DT_STRING, DT_INT32, DT_STRING, DT_INT32], summarize=3, _device="/job:localhost/replica:0/task:0/cpu:0"](net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/All/_891, net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert/data_0, net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert/data_1, net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert/data_2, net/decoder1/attention_decoder/rnn/strided_slice/_893, net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert/data_4, net/decoder1/attention_decoder/BahdanauAttention/strided_slice_1/_895)]]
         [[Node: net/decoder1/attention_decoder/rnn/while/rnn/attention_wrapper/assert_equal/Assert/Assert/_910 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_1774_net/decoder1/attention_decoder/rnn/while/rnn/attention_wrapper/assert_equal/Assert/Assert", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](^_cloopnet/decoder1/attention_decoder/rnn/while/rnn/attention_wrapper/checked_cell_output/_512)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "eval.py", line 69, in <module>
    eval()
  File "eval.py", line 47, in eval
    _outputs1 = sess.run(g.outputs1, {g.x: X, g.y: outputs1})
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 789, in run
    run_metadata_ptr)
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 997, in _run
    feed_dict_string, options, run_metadata)
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1132, in _do_run
    target_list, options, run_metadata)
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [When calling zero_state of AttentionWrapper attention_wrapper: Non-matching batch sizes between the memory (encoder output) and the requested batch size.  Are you using the BeamSearchDecoder?  If so, make sure your encoder output has been tiled to beam_width via tf.contrib.seq2seq.tile_batch, and the batch_size= argument passed to zero_state is batch_size * beam_width.] [Condition x == y did not hold element-wise:] [x (net/decoder1/attention_decoder/rnn/strided_slice:0) = ] [32] [y (net/decoder1/attention_decoder/BahdanauAttention/strided_slice_1:0) = ] [1]
         [[Node: net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert = Assert[T=[DT_STRING, DT_STRING, DT_STRING, DT_INT32, DT_STRING, DT_INT32], summarize=3, _device="/job:localhost/replica:0/task:0/cpu:0"](net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/All/_891, net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert/data_0, net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert/data_1, net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert/data_2, net/decoder1/attention_decoder/rnn/strided_slice/_893, net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert/data_4, net/decoder1/attention_decoder/BahdanauAttention/strided_slice_1/_895)]]
         [[Node: net/decoder1/attention_decoder/rnn/while/rnn/attention_wrapper/assert_equal/Assert/Assert/_910 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_1774_net/decoder1/attention_decoder/rnn/while/rnn/attention_wrapper/assert_equal/Assert/Assert", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](^_cloopnet/decoder1/attention_decoder/rnn/while/rnn/attention_wrapper/checked_cell_output/_512)]]

Caused by op 'net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert', defined at:
  File "eval.py", line 69, in <module>
    eval()
  File "eval.py", line 27, in eval
    g = Graph(is_training=False)
  File "E:\Python\Projekte\tacotron\train.py", line 49, in __init__
    is_training=is_training) # (N, T', hp.n_mels*hp.r)
  File "E:\Python\Projekte\tacotron\networks.py", line 85, in decode1
    dec = attention_decoder(dec, memory, num_units=hp.embed_size) # (N, T', E)
  File "E:\Python\Projekte\tacotron\modules.py", line 251, in attention_decoder
    dtype=tf.float32) #( N, T', 16)
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\ops\rnn.py", line 548, in dynamic_rnn
    state = cell.zero_state(batch_size, dtype)
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\contrib\seq2seq\python\ops\attention_wrapper.py", line 659, in zero_state
    message=error_message)]):
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\ops\check_ops.py", line 318, in assert_equal
    return control_flow_ops.Assert(condition, data, summarize=summarize)
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\util\tf_should_use.py", line 170, in wrapped
    return _add_should_use_warning(fn(*args, **kwargs))
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 124, in Assert
    condition, data, summarize, name="Assert")
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\ops\gen_logging_ops.py", line 37, in _assert
    summarize=summarize, name=name)
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 2506, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "C:\Users\User\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 1269, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): assertion failed: [When calling zero_state of AttentionWrapper attention_wrapper: Non-matching batch sizes between the memory (encoder output) and the requested batch size.  Are you using the BeamSearchDecoder?  If so, make sure your encoder output has been tiled to beam_width via tf.contrib.seq2seq.tile_batch, and the batch_size= argument passed to zero_state is batch_size * beam_width.] [Condition x == y did not hold element-wise:] [x (net/decoder1/attention_decoder/rnn/strided_slice:0) = ] [32] [y (net/decoder1/attention_decoder/BahdanauAttention/strided_slice_1:0) = ] [1]
         [[Node: net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert = Assert[T=[DT_STRING, DT_STRING, DT_STRING, DT_INT32, DT_STRING, DT_INT32], summarize=3, _device="/job:localhost/replica:0/task:0/cpu:0"](net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/All/_891, net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert/data_0, net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert/data_1, net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert/data_2, net/decoder1/attention_decoder/rnn/strided_slice/_893, net/decoder1/attention_decoder/rnn/AttentionWrapperZeroState/assert_equal/Assert/Assert/data_4, net/decoder1/attention_decoder/BahdanauAttention/strided_slice_1/_895)]]
         [[Node: net/decoder1/attention_decoder/rnn/while/rnn/attention_wrapper/assert_equal/Assert/Assert/_910 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_1774_net/decoder1/attention_decoder/rnn/while/rnn/attention_wrapper/assert_equal/Assert/Assert", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](^_cloopnet/decoder1/attention_decoder/rnn/while/rnn/attention_wrapper/checked_cell_output/_512)]]

What should I do?
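
For context, the failing call is the sess.run on line 47 of eval.py, and the assertion is comparing the leading (batch) dimensions of the two tensors being fed: 32 versus 1. One workaround I am considering is forcing both feeds to the same batch size before the call. This is only a sketch, not a confirmed fix: pad_to_batch is a helper I made up, and I am assuming hp.batch_size is the 32 shown in the assertion.

    # Hypothetical pre-check before the failing sess.run in eval.py (line 47).
    # pad_to_batch is my own helper, not part of this repo; hp.batch_size is
    # assumed to be the hyperparameter from hyperparams.py (the "32" above).
    import numpy as np

    def pad_to_batch(arr, batch_size):
        """Zero-pad (or truncate) arr along axis 0 to exactly batch_size rows."""
        n = arr.shape[0]
        if n >= batch_size:
            return arr[:batch_size]
        pad_width = [(0, batch_size - n)] + [(0, 0)] * (arr.ndim - 1)
        return np.pad(arr, pad_width, mode="constant")

    # Both feeds must share the same leading dimension; otherwise
    # AttentionWrapper.zero_state raises the assertion shown above.
    X = pad_to_batch(X, hp.batch_size)
    outputs1 = pad_to_batch(outputs1, hp.batch_size)
    _outputs1 = sess.run(g.outputs1, {g.x: X, g.y: outputs1})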

ErfolgreichCharismatisch commented 7 years ago

I need hints on how to solve this.

timbrucks commented 6 years ago

I am having the same issue. Did you have any luck tracking it down?

I am running on OS X 10.13, using TensorFlow 1.4 and Python 3.6 (Anaconda distribution).

dileepfrog commented 6 years ago

Having the same issue.

steve1991 commented 6 years ago

Same issue here. Has anyone solved it?