carpedm20 / multi-speaker-tacotron-tensorflow

Multi-speaker Tacotron in TensorFlow.
http://carpedm20.github.io/tacotron

AttentionWrapperState Missing attention_state argument #48

Open lucasjinreal opened 6 years ago

lucasjinreal commented 6 years ago

Hi, when running the code, I get the following error:

Traceback (most recent call last):
  File "train.py", line 336, in <module>
    main()
  File "train.py", line 332, in main
    train(config.model_dir, config)
  File "train.py", line 160, in train
    is_randomly_initialized=is_randomly_initialized)
  File "/media/jintian/sg/ai/lab/voice/multi-speaker-tacotron-tensorflow/models/tacotron.py", line 170, in initialize
    cells = [OutputProjectionWrapper(concat_cell, hp.dec_rnn_size)]
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell.py", line 356, in __init__
    rnn_cell_impl.assert_like_rnncell("cell", cell)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/rnn_cell_impl.py", line 77, in assert_like_rnncell
    hasattr(cell, "output_size"),
  File "/media/jintian/sg/ai/lab/voice/multi-speaker-tacotron-tensorflow/models/rnn_wrappers.py", line 406, in output_size
    return self._cell.output_size + self._cell.state_size.attention
  File "/media/jintian/sg/ai/lab/voice/multi-speaker-tacotron-tensorflow/models/rnn_wrappers.py", line 187, in state_size
    () for _ in self._attention_mechanisms))    # sometimes a TensorArray
TypeError: __new__() missing 1 required positional argument: 'attention_state'

The newest TensorFlow requires an attention_state argument. How do I add that inside this function?

 return AttentionWrapperState(
                    cell_state=cell_state,
                    time=tf.zeros([], dtype=tf.int32),
                    attention=_zero_state_tensors(self._attention_layer_size, batch_size, dtype),
                    alignments=self._item_or_tuple(
                            attention_mechanism.initial_alignments(batch_size, dtype)
                            for attention_mechanism in self._attention_mechanisms),
                    alignment_history=self._item_or_tuple(
                            tf.TensorArray(dtype=dtype, size=0, dynamic_size=True)
                            if self._alignment_history else ()
                            for _ in self._attention_mechanisms))
kdw9502 commented 5 years ago

You should downgrade TensorFlow to version 1.3.0.
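For example, in a Python 3.6 or older environment (TensorFlow 1.3.0 does not provide wheels for newer Python versions), something like pip install tensorflow==1.3.0 should work.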

anirudhsr7 commented 5 years ago

did you fix it?

Cocozy commented 5 years ago

Not yet, do you have any idea?


Ella77 commented 5 years ago

This is because of the deprecated TensorFlow version the code was written for.

You can fix it in models/rnn_wrappers.py by adding the required attention_state field wherever AttentionWrapperState is constructed, i.e. in the state_size property and in zero_state:

return AttentionWrapperState(
    ...
    attention_state=self._item_or_tuple(
        attention_mechanism.initial_state(batch_size, dtype)
        for attention_mechanism in self._attention_mechanisms),
    ...)

References: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
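For concreteness, a rough sketch of how both constructions could look after the patch. This mirrors the linked attention_wrapper.py rather than the repo's exact code, and it assumes your mechanisms expose state_size / initial_state, and that cell_state, batch_size, and dtype are in scope exactly as in the original zero_state:

    # in the state_size property: only static sizes, no batch_size in scope here
    return AttentionWrapperState(
        cell_state=self._cell.state_size,
        time=tf.TensorShape([]),
        attention=self._attention_layer_size,
        alignments=self._item_or_tuple(
            a.alignments_size for a in self._attention_mechanisms),
        attention_state=self._item_or_tuple(
            a.state_size for a in self._attention_mechanisms),
        alignment_history=self._item_or_tuple(
            () for _ in self._attention_mechanisms))

    # in zero_state: batch_size and dtype are arguments of this method
    return AttentionWrapperState(
        cell_state=cell_state,
        time=tf.zeros([], dtype=tf.int32),
        attention=_zero_state_tensors(self._attention_layer_size, batch_size, dtype),
        alignments=self._item_or_tuple(
            attention_mechanism.initial_alignments(batch_size, dtype)
            for attention_mechanism in self._attention_mechanisms),
        # new field required by newer AttentionWrapperState versions
        attention_state=self._item_or_tuple(
            attention_mechanism.initial_state(batch_size, dtype)
            for attention_mechanism in self._attention_mechanisms),
        alignment_history=self._item_or_tuple(
            tf.TensorArray(dtype=dtype, size=0, dynamic_size=True)
            if self._alignment_history else ()
            for _ in self._attention_mechanisms))

Note that state_size takes no batch_size argument, so only the mechanisms' static state_size can be used there.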

faaip commented 3 years ago

Hi @Ella77. After doing that, I get the following error:

File "train.py", line 336, in <module> main() File "train.py", line 332, in main train(config.model_dir, config) File "train.py", line 160, in train is_randomly_initialized=is_randomly_initialized) File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/multivocal/code/Users/frederik/multi-speaker-tacotron-tensorflow/models/tacotron.py", line 170, in initialize cells = [OutputProjectionWrapper(concat_cell, hp.dec_rnn_size)] File "/home/azureuser/.local/lib/python3.6/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell.py", line 356, in __init__ rnn_cell_impl.assert_like_rnncell("cell", cell) File "/home/azureuser/.local/lib/python3.6/site-packages/tensorflow/python/ops/rnn_cell_impl.py", line 91, in assert_like_rnncell _hasattr(cell, "output_size"), File "/home/azureuser/.local/lib/python3.6/site-packages/tensorflow/python/ops/rnn_cell_impl.py", line 68, in _hasattr getattr(obj, attr_name) File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/multivocal/code/Users/frederik/multi-speaker-tacotron-tensorflow/models/rnn_wrappers.py", line 406, in output_size return self._cell.output_size + self._cell.state_size.attention File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/multivocal/code/Users/frederik/multi-speaker-tacotron-tensorflow/models/rnn_wrappers.py", line 180, in state_size for attention_mechanism in self._attention_mechanisms), File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/multivocal/code/Users/frederik/multi-speaker-tacotron-tensorflow/models/rnn_wrappers.py", line 162, in _item_or_tuple t = tuple(seq) File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/multivocal/code/Users/frederik/multi-speaker-tacotron-tensorflow/models/rnn_wrappers.py", line 180, in <genexpr> for attention_mechanism in self._attention_mechanisms), NameError: name 'batch_size' is not defined

Do you know a fix for this?