Currie32 / Chatbot-from-Movie-Dialogue

Built a simple chatbot from a sequence-to-sequence model with TensorFlow.

AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'prepare_attention' #7

blindfish commented 7 years ago

This part of the code:

# Create the training and inference logits
train_logits, inference_logits = seq2seq_model(
    tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(answers_vocab_to_int), 
    len(questions_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, 
    questions_vocab_to_int)

It fails with the following error. Has anyone gotten the code to work with TF 1.2.1 or 1.3?

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-42-5eb2c4ab2c25> in <module>()
     15     tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(answers_vocab_to_int),
     16     len(questions_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers,
---> 17     questions_vocab_to_int)
     18 
     19 # Create a tensor for the inference logits, needed if loading a checkpoint version of the model

<ipython-input-39-bbac5bbc5884> in seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, answers_vocab_size, questions_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, questions_vocab_to_int)
     24                                                 questions_vocab_to_int,
     25                                                 keep_prob,
---> 26                                                 batch_size)
     27     return train_logits, infer_logits

<ipython-input-38-4c62787c7f16> in decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, vocab_to_int, keep_prob, batch_size)
     24                                             output_fn,
     25                                             keep_prob,
---> 26                                             batch_size)
     27         decoding_scope.reuse_variables()
     28         infer_logits = decoding_layer_infer(encoder_state, 

<ipython-input-36-c7b11c624372> in decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob, batch_size)
      5     attention_states = tf.zeros([batch_size, 1, dec_cell.output_size])
      6 
----> 7     att_keys, att_vals, att_score_fn, att_construct_fn = tf.contrib.seq2seq.prepare_attention(attention_states,
      8                                                  attention_option="bahdanau",
      9                                                  num_units=dec_cell.output_size)

AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'prepare_attention'

shreyneil commented 7 years ago

Use TensorFlow version 1.0.0; the code will only work with that.

abhibisht89 commented 7 years ago

I am also having the same issue. Do I need to downgrade the TF version (I am on 1.3.0), or is there some other solution to this issue?

Khanquer17 commented 6 years ago

Same issue here.

shreyneil commented 6 years ago

You have to downgrade; downgrading is what worked for me.

saurabhrathor commented 6 years ago

Could not find TensorFlow version 1.0:

C:\Users\eratsau>pip install tensorflow==1.0
Collecting tensorflow==1.0
  Cache entry deserialization failed, entry ignored
Could not find a version that satisfies the requirement tensorflow==1.0 (from versions: 1.2.0rc2, 1.2.0, 1.2.1, 1.3.0rc0, 1.3.0rc1, 1.3.0rc2, 1.3.0, 1.4.0rc0, 1.4.0rc1, 1.4.0, 1.5.0rc0, 1.5.0rc1, 1.5.0, 1.6.0rc0)
No matching distribution found for tensorflow==1.0

adithyaChander commented 6 years ago

Use Python 3.5 (upgrade/downgrade your Python version accordingly), then downgrade TensorFlow to 1.0.0. pip only lists releases that have wheels for your Python version, which is why 1.0.0 doesn't show up under newer Pythons. This should work :)
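
For example, a fresh Python 3.5 environment along these lines should let pip see the 1.0.0 wheel (the environment name is arbitrary, and this assumes Anaconda is installed):

conda create -n tf100 python=3.5
conda activate tf100
pip install tensorflow==1.0.0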

sumanthd17 commented 6 years ago

What is the alternative for tf.contrib.seq2seq.prepare_attention() in the TensorFlow 1.8 API?
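
For anyone hitting this: prepare_attention was removed when contrib.seq2seq was rewritten around TF 1.2, and its replacement is the AttentionWrapper API. Below is a minimal training-side sketch, assuming the notebook's variable names (dec_cell, dec_embed_input, encoder_state, sequence_length, rnn_size, vocab_size, batch_size) and that the encoder's outputs are available as encoder_outputs; the original code passed a zero placeholder as attention_states, so this is an adaptation, not a drop-in fix.

import tensorflow as tf
from tensorflow.python.layers.core import Dense

# Attention over the encoder states (replaces prepare_attention).
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(
    num_units=rnn_size,
    memory=encoder_outputs,
    memory_sequence_length=sequence_length)

# Wrap the decoder cell so it attends on every step (replaces the
# att_keys / att_vals / att_score_fn / att_construct_fn quadruple).
attn_cell = tf.contrib.seq2seq.AttentionWrapper(
    dec_cell, attention_mechanism, attention_layer_size=rnn_size)

# Seed the wrapped cell's state with the encoder's final state.
initial_state = attn_cell.zero_state(batch_size, tf.float32).clone(
    cell_state=encoder_state)

# Training-time decoding (replaces attention_decoder_fn_train plus
# dynamic_rnn_decoder).
output_layer = Dense(vocab_size)
train_helper = tf.contrib.seq2seq.TrainingHelper(
    inputs=dec_embed_input, sequence_length=sequence_length)
train_decoder = tf.contrib.seq2seq.BasicDecoder(
    attn_cell, train_helper, initial_state, output_layer=output_layer)
train_outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(train_decoder)
train_logits = train_outputs.rnn_output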

rahulv1993 commented 6 years ago

I am unable to downgrade TensorFlow 1.9 to 1.0 in Anaconda; I've tried many ways. Please help me with this, thanks!

Shanunicorn commented 6 years ago

Is there any other way to resolve this issue without downgrading TensorFlow?

Joshsnailz commented 6 years ago

Is there any other workaround that doesn't require downgrading my TensorFlow?

Swaminathan-R commented 6 years ago

Is there any workaround other than downgrading TensorFlow?

RamKumar-T-R commented 1 year ago

What is the alternative for tensorflow.contrib.seq2seq.prepare_attention() in TensorFlow 1.13 or 1.14?
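
The AttentionWrapper approach sketched above still applies there: tf.contrib shipped through TF 1.14 and was only removed in TF 2.0. Here is the inference-side counterpart, assuming the same attn_cell, initial_state, and output_layer as the training sketch, the notebook's dec_embeddings and vocab_to_int ('<GO>' and '<EOS>' are the notebook's token names), and a hypothetical max_line_length cap on decode steps:

# GreedyEmbeddingHelper feeds each predicted token back in as the next
# input, replacing the old attention_decoder_fn_inference.
infer_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
    embedding=dec_embeddings,
    start_tokens=tf.fill([batch_size], vocab_to_int['<GO>']),
    end_token=vocab_to_int['<EOS>'])

# Reusing the same cell and output layer objects shares their weights
# with the training decoder; as in the original notebook, decode under
# a reused variable scope if you build both graphs in one scope.
infer_decoder = tf.contrib.seq2seq.BasicDecoder(
    attn_cell, infer_helper, initial_state, output_layer=output_layer)
infer_outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(
    infer_decoder, maximum_iterations=max_line_length)
infer_logits = infer_outputs.rnn_output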