keithito / tacotron

A TensorFlow implementation of Google's Tacotron speech synthesis with pre-trained model (unofficial)
MIT License
2.94k stars 965 forks

RuntimeError: Attempted to use a closed Session. #305

Closed mirfan899 closed 4 years ago

mirfan899 commented 4 years ago

I'm trying to train the model on Ubuntu 16.04 with a Tesla K80 GPU, CUDA 9, cuDNN 7, and tensorflow-gpu==1.12, and I'm getting this issue when I try to train the model:

UnknownError (see above for traceback): Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
         [[node model/inference/encoder_cbhg/conv_bank/conv1d_1/conv1d/conv1d/Conv2D (defined at /home/virtuoso_irfan/tacotron/models/modules.py:106)  = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](model/optimizer/gradients/model/inference/encoder_cbhg/conv_bank/conv1d_1/conv1d/conv1d/Conv2D_grad/Conv2DBackpropFilter-0-TransposeNHWCToNCHW-LayoutOptimizer, model/inference/encoder_cbhg/conv_bank/conv1d_1/conv1d/conv1d/ExpandDims_1)]]
         [[{{node model/inference/decoder/while/BasicDecoderStep/decoder/output_projection_wrapper/output_projection_wrapper/multi_rnn_cell/cell_0/cell_0/output_projection_wrapper/output_projection_wrapper/concat_output_and_attention_wrapper/concat_output_and_attention_wrapper/decoder_prenet_wrapper/decoder_prenet_wrapper/attention_wrapper/assert_equal/Equal/Enter/_635}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7546_...qual/Enter", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](^_cloopmodel/inference/decoder/while/BasicDecoderStep/decoder/output_projection_wrapper/output_projection_wrapper/multi_rnn_cell/cell_0/cell_0/output_projection_wrapper/output_projection_wrapper/concat_output_and_attention_wrapper/concat_output_and_attention_wrapper/decoder_prenet_wrapper/decoder_prenet_wrapper/attention_wrapper/assert_equal/Assert/Assert/data_0/_5)]]

Traceback (most recent call last):
  File "/home/virtuoso_irfan/tacotron/datasets/datafeeder.py", line 74, in run
    self._enqueue_next_group()
  File "/home/virtuoso_irfan/tacotron/datasets/datafeeder.py", line 96, in _enqueue_next_group
    self._session.run(self._enqueue_op, feed_dict=feed_dict)
  File "/home/virtuoso_irfan/tacotron/.t1/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/home/virtuoso_irfan/tacotron/.t1/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1075, in _run
    raise RuntimeError('Attempted to use a closed Session.')
RuntimeError: Attempted to use a closed Session.
xieyuankun commented 4 years ago

Try increasing the value of max_iters, and make sure the values of batch_size and outputs_per_step are appropriate.

mirfan899 commented 4 years ago

Well, it was related to a cuDNN version issue. I had 7.0, but tacotron required 7.1.4. After updating cuDNN, I'm getting this error:

Checkpoint path: ./logs-tacotron/model.ckpt
Loading training data from: ./training/train.txt
Using model: tacotron
Hyperparameters:
  adam_beta1: 0.9
  adam_beta2: 0.999
  attention_depth: 256
  batch_size: 32
  cleaners: english_cleaners
  decay_learning_rate: True
  decoder_depth: 256
  embed_depth: 256
  encoder_depth: 256
  frame_length_ms: 50
  frame_shift_ms: 12.5
  griffin_lim_iters: 60
  initial_learning_rate: 0.002
  max_iters: 200
  min_level_db: -100
  num_freq: 1025
  num_mels: 80
  outputs_per_step: 5
  postnet_depth: 256
  power: 1.5
  preemphasis: 0.97
  prenet_depths: [256, 128]
  ref_level_db: 20
  sample_rate: 20000
  use_cmudict: False
Loaded metadata for 850 examples (1.81 hours)
Initialized Tacotron model. Dimensions: 
  embedding:               256
  prenet out:              128
  encoder out:             256
  attention out:           256
  concat attn & out:       512
  decoder cell out:        256
  decoder out (5 frames):  400
  decoder out (1 frame):   80
  postnet out:             256
  linear out:              1025
Starting new training run at commit: None
Generated 32 batches of size 32 in 46.279 sec
Step 1       [60.680 sec/step, loss=0.82188, avg_loss=0.82188]
Step 2       [32.100 sec/step, loss=0.81987, avg_loss=0.82088]
Step 3       [22.420 sec/step, loss=0.82569, avg_loss=0.82248]
Step 4       [17.712 sec/step, loss=0.81703, avg_loss=0.82112]
Exiting due to exception: Incompatible shapes: [32,1285,80] vs. [32,1000,80]
         [[node model/loss/sub (defined at /home/virtuoso_irfan/tacotron/models/tacotron.py:118)  = Sub[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](datafeeder/input_queue_Dequeue/_21, model/inference/Reshape)]]
         [[{{node model/optimizer/gradients/model/inference/post_cbhg/highway_2/H/Tensordot/Reshape_grad/Shape/_461}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_4962_...grad/Shape", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
mirfan899 commented 4 years ago

The issue was fixed after setting max_iters=300 in hparams.py.
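For context, the shape mismatch above ([32,1285,80] vs. [32,1000,80]) follows directly from the decoder's capacity: it runs for at most max_iters steps and emits outputs_per_step mel frames per step, so the longest target it can match is max_iters × outputs_per_step frames. With the hyperparameters shown (max_iters=200, outputs_per_step=5) that caps out at 1000 frames, which is shorter than the 1285-frame target. A minimal sketch of the arithmetic (function names are illustrative, not from the repo):

```python
# Illustrative helpers, not part of the tacotron codebase.

def max_decoder_frames(max_iters, outputs_per_step):
    # Longest mel target the decoder can produce.
    return max_iters * outputs_per_step

def min_max_iters(longest_target_frames, outputs_per_step):
    # Smallest max_iters that covers the longest utterance (ceiling division).
    return -(-longest_target_frames // outputs_per_step)

# With the hyperparameters from the log above:
print(max_decoder_frames(200, 5))   # 1000 -> too short for a 1285-frame target
print(max_decoder_frames(300, 5))   # 1500 -> covers 1285, so training proceeds
print(min_max_iters(1285, 5))       # 257  -> smallest max_iters that would work
```

So any max_iters of at least 257 would have avoided this particular error; 300 leaves headroom for longer utterances in the dataset.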