Rayhane-mamah / Tacotron-2

DeepMind's Tacotron-2 Tensorflow implementation
MIT License

Please Help me about this #104

Open ilhamprayudha opened 6 years ago

ilhamprayudha commented 6 years ago

1. Is there any limit on the data during preprocessing? I use a dataset with 15438 items, but preprocessing only produces 12988.

2. What can be changed in hparams to solve OOM during WaveNet training?
cobr123 commented 6 years ago
In hparams: `clip_mels_length = True` (for cases of OOM; not really recommended, working on a workaround) and `max_mel_frames = 900` (only relevant when clip_mels_length = True).
ilhamprayudha commented 6 years ago

Why is train_steps left at its default? Will the loss still be good if I train for more or fewer steps than the default?

I've tried the LJSpeech dataset; why is only 1 sentence generated in WaveNet?

How do I use the model?

Rayhane-mamah commented 6 years ago

Hello @ilhamprayudha, please refer to the README for information on how to use the model (both during training and synthesis).

Like @cobr123 said, "max_mel_frames" is responsible for limiting data by length so you don't get OOM errors during training. Test different max_mel_frames values to see what suits you best. You can also avoid OOM errors by making "outputs_per_step" bigger (I don't recommend going above 3). It is also recommended to keep batch_size=32.
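For reference, here is a minimal sketch of the relevant hparams.py entries (assuming the layout of this repo's hparams.py; exact names and default values may differ between versions):

```python
import tensorflow as tf

# Sketch of the OOM-related entries in hparams.py (illustrative values, check your own file)
hparams = tf.contrib.training.HParams(
    # Drop utterances whose mel spectrograms are longer than max_mel_frames (helps against OOM)
    clip_mels_length=True,   # For cases of OOM (not really recommended)
    max_mel_frames=900,      # Only relevant when clip_mels_length=True; lower it if you still get OOM

    # The decoder predicts this many frames per step; larger values reduce memory usage
    outputs_per_step=2,      # Going above 3 is not recommended

    # Keeping the batch size at 32 is recommended; reduce it only as a last resort
    batch_size=32,
)
```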

Can you give a clearer explanation of "why only 1 sentence is generated in wavenet"? Thanks.

ilhamprayudha commented 6 years ago
1. How about "We also present timings for sample generation, demonstrating more than 1000× speed-up relative to original WaveNet." from the Parallel WaveNet paper? Generation takes a very long time for me.

2. The Parallel WaveNet paper uses the raw format, not the mu-law format, so which format does this code use?
Rayhane-mamah commented 6 years ago

@ilhamprayudha I provided an answer here.

This repo's speed: 1 second of audio is generated in 62 seconds.

ilhamprayudha commented 6 years ago

@Rayhane-mamah

1. Which format is best in your code, raw or mu-law, and how does it differ for Tacotron-2?

Why does your new repo add WaveNet preprocessing? Is the GTA output better than the Tacotron-2 synthesized output?

When I try your new repo, preprocessing prints: `/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88 return f(*args, **kwds)`

Rayhane-mamah commented 6 years ago

Raw wavenet gives an overall better audio quality than mulaw-quantize, but it is slower to converge.
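To illustrate the difference, here is a rough sketch of mu-law companding and 8-bit quantization (illustrative only, not this repo's exact implementation; "raw" keeps the full-resolution waveform instead):

```python
import numpy as np

def mulaw(x, mu=255):
    # mu-law companding: maps [-1, 1] float audio to [-1, 1] with logarithmic resolution
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mulaw_quantize(x, mu=255):
    # 8-bit quantization as used by a "mulaw-quantize" input type
    return np.round((mulaw(x, mu) + 1) / 2 * mu).astype(np.int32)

x = np.linspace(-1, 1, 5)          # [-1, -0.5, 0, 0.5, 1]
print(mulaw_quantize(x))           # [  0  16 128 239 255]
```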

wavenet_preprocess was added as a way to use Wavenet in a standalone fashion in case one is not interested in the Tacotron part.

Please make sure you have the same numpy version as in the requirements file.
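A quick way to verify the version actually being imported (compare it against whatever requirements.txt pins):

```python
import numpy as np
print(np.__version__)  # should match the numpy version listed in requirements.txt
```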

ilhamprayudha commented 6 years ago

My numpy version is the same, but the warning still appears.

This also happens when I run training:

```
Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1322, in _do_call return fn(*args) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1307, in _run_fn options, feed_dict, fetch_list, target_list, run_metadata) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun run_metadata) tensorflow.python.framework.errors_impl.CancelledError: Enqueue operation was cancelled [[Node: datafeeder/input_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](datafeeder/input_queue, _arg_datafeeder/inputs_0_1, _arg_datafeeder/input_lengths_0_0, _arg_datafeeder/mel_targets_0_3, _arg_datafeeder/token_targets_0_5, _arg_datafeeder/linear_targets_0_2, _arg_datafeeder/targets_lengths_0_4)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner self.run() File "/usr/lib/python3.5/threading.py", line 862, in run self._target(*self._args, **self._kwargs) File "/home/nlplab/PKL/Tacotron-2/tacotron/feeder.py", line 166, in _enqueue_next_train_group self._session.run(self._enqueue_op, feed_dict=feed_dict) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 900, in run run_metadata_ptr) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1135, in _run feed_dict_tensor, options, run_metadata) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1316, in _do_run run_metadata) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1335, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.CancelledError: Enqueue operation was cancelled [[Node: datafeeder/input_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](datafeeder/input_queue, _arg_datafeeder/inputs_0_1, _arg_datafeeder/input_lengths_0_0, _arg_datafeeder/mel_targets_0_3, _arg_datafeeder/token_targets_0_5, _arg_datafeeder/linear_targets_0_2, _arg_datafeeder/targets_lengths_0_4)]]

Caused by op 'datafeeder/input_queue_enqueue', defined at: File "train.py", line 127, in main() File "train.py", line 121, in main train(args, log_dir, hparams) File "train.py", line 51, in train checkpoint = tacotron_train(args, log_dir, hparams) File "/home/nlplab/PKL/Tacotron-2/tacotron/train.py", line 305, in tacotron_train return train(log_dir, args, hparams) File "/home/nlplab/PKL/Tacotron-2/tacotron/train.py", line 123, in train feeder = Feeder(coord, input_path, hparams) File "/home/nlplab/PKL/Tacotron-2/tacotron/feeder.py", line 85, in init self._enqueue_op = queue.enqueue(self._placeholders) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/data_flow_ops.py", line 346, in enqueue self._queue_ref, vals, name=scope) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 3977, in queue_enqueue_v2 timeout_ms=timeout_ms, name=name) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper op_def=op_def) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3414, in create_op op_def=op_def) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1740, in init self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

CancelledError (see above for traceback): Enqueue operation was cancelled [[Node: datafeeder/input_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](datafeeder/input_queue, _arg_datafeeder/inputs_0_1, _arg_datafeeder/input_lengths_0_0, _arg_datafeeder/mel_targets_0_3, _arg_datafeeder/token_targets_0_5, _arg_datafeeder/linear_targets_0_2, _arg_datafeeder/targets_lengths_0_4)]]

Exception in thread background: Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1322, in _do_call return fn(*args) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1307, in _run_fn options, feed_dict, fetch_list, target_list, run_metadata) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun run_metadata) tensorflow.python.framework.errors_impl.CancelledError: Enqueue operation was cancelled [[Node: datafeeder/eval_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](datafeeder/eval_queue, _arg_datafeeder/inputs_0_1, _arg_datafeeder/input_lengths_0_0, _arg_datafeeder/mel_targets_0_3, _arg_datafeeder/token_targets_0_5, _arg_datafeeder/linear_targets_0_2, _arg_datafeeder/targets_lengths_0_4)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner self.run() File "/usr/lib/python3.5/threading.py", line 862, in run self._target(*self._args, **self._kwargs) File "/home/nlplab/PKL/Tacotron-2/tacotron/feeder.py", line 174, in _enqueue_next_test_group self._session.run(self._eval_enqueue_op, feed_dict=feed_dict) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 900, in run run_metadata_ptr) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1135, in _run feed_dict_tensor, options, run_metadata) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1316, in _do_run run_metadata) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1335, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.CancelledError: Enqueue operation was cancelled [[Node: datafeeder/eval_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](datafeeder/eval_queue, _arg_datafeeder/inputs_0_1, _arg_datafeeder/input_lengths_0_0, _arg_datafeeder/mel_targets_0_3, _arg_datafeeder/token_targets_0_5, _arg_datafeeder/linear_targets_0_2, _arg_datafeeder/targets_lengths_0_4)]]

Caused by op 'datafeeder/eval_queue_enqueue', defined at: File "train.py", line 127, in main() File "train.py", line 121, in main train(args, log_dir, hparams) File "train.py", line 51, in train checkpoint = tacotron_train(args, log_dir, hparams) File "/home/nlplab/PKL/Tacotron-2/tacotron/train.py", line 305, in tacotron_train return train(log_dir, args, hparams) File "/home/nlplab/PKL/Tacotron-2/tacotron/train.py", line 123, in train feeder = Feeder(coord, input_path, hparams) File "/home/nlplab/PKL/Tacotron-2/tacotron/feeder.py", line 97, in init self._eval_enqueue_op = eval_queue.enqueue(self._placeholders) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/data_flow_ops.py", line 346, in enqueue self._queue_ref, vals, name=scope) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 3977, in queue_enqueue_v2 timeout_ms=timeout_ms, name=name) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper op_def=op_def) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3414, in create_op op_def=op_def) File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1740, in init self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

CancelledError (see above for traceback): Enqueue operation was cancelled [[Node: datafeeder/eval_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](datafeeder/eval_queue, _arg_datafeeder/inputs_0_1, _arg_datafeeder/input_lengths_0_0, _arg_datafeeder/mel_targets_0_3, _arg_datafeeder/token_targets_0_5, _arg_datafeeder/linear_targets_0_2, _arg_datafeeder/targets_lengths_0_4)]]

```

Why did that happen?

Rayhane-mamah commented 6 years ago

@ilhamprayudha, according to this issue, your numpy warning can safely be ignored. As for the enqueue cancelling, it happens when a bug occurs during training and the feeder drops all queues. The log you provided doesn't contain the root cause of the crash, though, so I can't really say what's going on with your run (the useful part of a crash log is usually its first lines).

Maybe try running the training with all stdout redirected to a log file so that we can capture the entire crash log, then send it to me. Thanks :)

shartoo commented 6 years ago

I got exactly the same error as @ilhamprayudha described above, using the Tacotron-2 model with my own dataset. How do I fix the problem? Here is the detailed log:

[2018-08-27 03:22:24.026]  Exiting due to exception: assertion failed: [] [Condition x == y did not hold element-wise:] [x (model_1/inference/strided_slice_5:0) = ] [14100] [y (model_1/inference/strided_slice_6:0) = ] [12925]
     [[Node: model_1/inference/assert_equal/Assert/Assert = Assert[T=[DT_STRING, DT_STRING, DT_STRING, DT_INT32, DT_STRING, DT_INT32], summarize=3, _device="/job:localhost/replica:0/task:0/device:CPU:0"](model_1/inference/assert_equal/Equal/_27, model_1/loss/assert_equal_2/Assert/Assert/data_0, model_1/loss/assert_equal_2/Assert/Assert/data_1, model_1/inference/assert_equal/Assert/Assert/data_2, model_1/inference/strided_slice_5/_29, model_1/inference/assert_equal/Assert/Assert/data_4, model_1/inference/strided_slice_1/_31)]]

Caused by op 'model_1/inference/assert_equal/Assert/Assert', defined at:
  File "I:/newwork/Tacotron-2-mandarin-new/train.py", line 133, in <module>
    main()
  File "I:/newwork/Tacotron-2-mandarin-new/train.py", line 127, in main
    train(args, log_dir, hparams)
  File "I:/newwork/Tacotron-2-mandarin-new/train.py", line 79, in train
    checkpoint = wavenet_train(args, log_dir, hparams, input_path)
  File "I:\newwork\Tacotron-2-mandarin-new\wavenet_vocoder\train.py", line 252, in wavenet_train
    return train(log_dir, args, hparams, input_path)
  File "I:\newwork\Tacotron-2-mandarin-new\wavenet_vocoder\train.py", line 176, in train
    model, stats = model_train_mode(args, feeder, hparams, global_step)
  File "I:\newwork\Tacotron-2-mandarin-new\wavenet_vocoder\train.py", line 124, in model_train_mode
    feeder.input_lengths, x=feeder.inputs)
  File "I:\newwork\Tacotron-2-mandarin-new\wavenet_vocoder\models\wavenet.py", line 176, in initialize
    y_hat = self.step(x, c, g, softmax=False) #softmax is automatically computed inside softmax_cross_entropy if needed
  File "I:\newwork\Tacotron-2-mandarin-new\wavenet_vocoder\models\wavenet.py", line 474, in step
    with tf.control_dependencies([tf.assert_equal(tf.shape(c)[-1], tf.shape(x)[-1])]):
  File "C:\Python36\lib\site-packages\tensorflow\python\ops\check_ops.py", line 405, in assert_equal
    return control_flow_ops.Assert(condition, data, summarize=summarize)
  File "C:\Python36\lib\site-packages\tensorflow\python\util\tf_should_use.py", line 118, in wrapped
    return _add_should_use_warning(fn(*args, **kwargs))
  File "C:\Python36\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 172, in Assert
    return gen_logging_ops._assert(condition, data, summarize, name="Assert")
  File "C:\Python36\lib\site-packages\tensorflow\python\ops\gen_logging_ops.py", line 51, in _assert
    name=name)
  File "C:\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "C:\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3392, in create_op
    op_def=op_def)
  File "C:\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1718, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): assertion failed: [] [Condition x == y did not hold element-wise:] [x (model_1/inference/strided_slice_5:0) = ] [14100] [y (model_1/inference/strided_slice_6:0) = ] [12925]
     [[Node: model_1/inference/assert_equal/Assert/Assert = Assert[T=[DT_STRING, DT_STRING, DT_STRING, DT_INT32, DT_STRING, DT_INT32], summarize=3, _device="/job:localhost/replica:0/task:0/device:CPU:0"](model_1/inference/assert_equal/Equal/_27, model_1/loss/assert_equal_2/Assert/Assert/data_0, model_1/loss/assert_equal_2/Assert/Assert/data_1, model_1/inference/assert_equal/Assert/Assert/data_2, model_1/inference/strided_slice_5/_29, model_1/inference/assert_equal/Assert/Assert/data_4, model_1/inference/strided_slice_1/_31)]]
shartoo commented 6 years ago

My console log is:

Traceback (most recent call last):
  File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1322, in _do_call
    return fn(*args)
  File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1307, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1409, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.CancelledError: Enqueue operation was cancelled
     [[Node: datafeeder/eval_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](datafeeder/eval_queue, _arg_datafeeder/inputs_0_1, _arg_datafeeder/input_lengths_0_0, _arg_datafeeder/mel_targets_0_3, _arg_datafeeder/token_targets_0_5, _arg_datafeeder/linear_targets_0_2, _arg_datafeeder/targets_lengths_0_4)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python36\lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "C:\Python36\lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "I:\newwork\Tacotron-2-mandarin-new\tacotron\feeder.py", line 175, in _enqueue_next_test_group
    self._session.run(self._eval_enqueue_op, feed_dict=feed_dict)
  File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 900, in run
    run_metadata_ptr)
  File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1135, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1316, in _do_run
    run_metadata)
  File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1335, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.CancelledError: Enqueue operation was cancelled
     [[Node: datafeeder/eval_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](datafeeder/eval_queue, _arg_datafeeder/inputs_0_1, _arg_datafeeder/input_lengths_0_0, _arg_datafeeder/mel_targets_0_3, _arg_datafeeder/token_targets_0_5, _arg_datafeeder/linear_targets_0_2, _arg_datafeeder/targets_lengths_0_4)]]

Caused by op 'datafeeder/eval_queue_enqueue', defined at:
  File "I:/newwork/Tacotron-2-mandarin-new/train.py", line 133, in <module>
    main()
  File "I:/newwork/Tacotron-2-mandarin-new/train.py", line 127, in main
    train(args, log_dir, hparams)
  File "I:/newwork/Tacotron-2-mandarin-new/train.py", line 51, in train
    checkpoint = tacotron_train(args, log_dir, hparams)
  File "I:\newwork\Tacotron-2-mandarin-new\tacotron\train.py", line 310, in tacotron_train
    return train(log_dir, args, hparams)
  File "I:\newwork\Tacotron-2-mandarin-new\tacotron\train.py", line 128, in train
    feeder = Feeder(coord, input_path, hparams)
  File "I:\newwork\Tacotron-2-mandarin-new\tacotron\feeder.py", line 98, in __init__
    self._eval_enqueue_op = eval_queue.enqueue(self._placeholders)
  File "C:\Python36\lib\site-packages\tensorflow\python\ops\data_flow_ops.py", line 346, in enqueue
    self._queue_ref, vals, name=scope)
  File "C:\Python36\lib\site-packages\tensorflow\python\ops\gen_data_flow_ops.py", line 4373, in queue_enqueue_v2
    timeout_ms=timeout_ms, name=name)
  File "C:\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "C:\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3392, in create_op
    op_def=op_def)
  File "C:\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1718, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

CancelledError (see above for traceback): Enqueue operation was cancelled
     [[Node: datafeeder/eval_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](datafeeder/eval_queue, _arg_datafeeder/inputs_0_1, _arg_datafeeder/input_lengths_0_0, _arg_datafeeder/mel_targets_0_3, _arg_datafeeder/token_targets_0_5, _arg_datafeeder/linear_targets_0_2, _arg_datafeeder/targets_lengths_0_4)]]

Exception in thread background:
Traceback (most recent call last):
  File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1322, in _do_call
    return fn(*args)
  File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1307, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1409, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.CancelledError: Enqueue operation was cancelled
     [[Node: datafeeder/input_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](datafeeder/input_queue, _arg_datafeeder/inputs_0_1, _arg_datafeeder/input_lengths_0_0, _arg_datafeeder/mel_targets_0_3, _arg_datafeeder/token_targets_0_5, _arg_datafeeder/linear_targets_0_2, _arg_datafeeder/targets_lengths_0_4)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python36\lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "C:\Python36\lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "I:\newwork\Tacotron-2-mandarin-new\tacotron\feeder.py", line 167, in _enqueue_next_train_group
    self._session.run(self._enqueue_op, feed_dict=feed_dict)
  File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 900, in run
    run_metadata_ptr)
  File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1135, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1316, in _do_run
    run_metadata)
  File "C:\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1335, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.CancelledError: Enqueue operation was cancelled
     [[Node: datafeeder/input_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](datafeeder/input_queue, _arg_datafeeder/inputs_0_1, _arg_datafeeder/input_lengths_0_0, _arg_datafeeder/mel_targets_0_3, _arg_datafeeder/token_targets_0_5, _arg_datafeeder/linear_targets_0_2, _arg_datafeeder/targets_lengths_0_4)]]

Caused by op 'datafeeder/input_queue_enqueue', defined at:
  File "I:/newwork/Tacotron-2-mandarin-new/train.py", line 133, in <module>
    main()
  File "I:/newwork/Tacotron-2-mandarin-new/train.py", line 127, in main
    train(args, log_dir, hparams)
  File "I:/newwork/Tacotron-2-mandarin-new/train.py", line 51, in train
    checkpoint = tacotron_train(args, log_dir, hparams)
  File "I:\newwork\Tacotron-2-mandarin-new\tacotron\train.py", line 310, in tacotron_train
    return train(log_dir, args, hparams)
  File "I:\newwork\Tacotron-2-mandarin-new\tacotron\train.py", line 128, in train
    feeder = Feeder(coord, input_path, hparams)
  File "I:\newwork\Tacotron-2-mandarin-new\tacotron\feeder.py", line 86, in __init__
    self._enqueue_op = queue.enqueue(self._placeholders)
  File "C:\Python36\lib\site-packages\tensorflow\python\ops\data_flow_ops.py", line 346, in enqueue
    self._queue_ref, vals, name=scope)
  File "C:\Python36\lib\site-packages\tensorflow\python\ops\gen_data_flow_ops.py", line 4373, in queue_enqueue_v2
    timeout_ms=timeout_ms, name=name)
  File "C:\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "C:\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3392, in create_op
    op_def=op_def)
  File "C:\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1718, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

CancelledError (see above for traceback): Enqueue operation was cancelled
     [[Node: datafeeder/input_queue_enqueue = QueueEnqueueV2[Tcomponents=[DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](datafeeder/input_queue, _arg_datafeeder/inputs_0_1, _arg_datafeeder/input_lengths_0_0, _arg_datafeeder/mel_targets_0_3, _arg_datafeeder/token_targets_0_5, _arg_datafeeder/linear_targets_0_2, _arg_datafeeder/targets_lengths_0_4)]]
shartoo commented 6 years ago

I'm training a fresh model from scratch, without using a pre-trained model.

shartoo commented 6 years ago

Found the solution here: the relationship between the hop size and the product of the upsample scales. Thank you!
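For anyone hitting the same assertion: the conditioning features are upsampled by the product of the upsample scales, so that product has to equal the hop size, otherwise the length check `tf.assert_equal(tf.shape(c)[-1], tf.shape(x)[-1])` in wavenet.py fails as shown above. A hedged sketch of the constraint (example values only, check your own hparams.py):

```python
from functools import reduce

# Example values only: the product of upsample_scales must equal hop_size,
# otherwise the upsampled conditioning features and the audio have different lengths.
hop_size = 275
upsample_scales = [5, 5, 11]   # 5 * 5 * 11 = 275

assert reduce(lambda a, b: a * b, upsample_scales) == hop_size, \
    'the product of upsample_scales must equal hop_size'
```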