suriyadeepan / practical_seq2seq

A simple, minimal wrapper for tensorflow's seq2seq module, for experimenting with datasets rapidly
http://suriyadeepan.github.io/2016-12-31-practical-seq2seq/
GNU General Public License v3.0

Importing using the last checkpoint #12

Open charan16 opened 7 years ago

charan16 commented 7 years ago

```
Attempting to use uninitialized value decoder/embedding_rnn_seq2seq/embedding_rnn_decoder/embedding/Adam_1
	 [[Node: decoder/embedding_rnn_seq2seq/embedding_rnn_decoder/embedding/Adam_1/read = Identity[T=DT_FLOAT, _class=["loc:@decoder/embedding_rnn_seq2seq/embedding_rnn_decoder/embedding"], _device="/job:localhost/replica:0/task:0/cpu:0"]]]
```

superMDguy commented 7 years ago

I'm having the same issue.

rojansudev commented 7 years ago

Me too. Has anybody got a fix?

nunezpaul commented 6 years ago

Hey, I was having this problem too. The likely cause is that you're not actually importing the ckpt files.

It turns out that running the code as-is will not raise an error even when no checkpoint file was loaded, because the loading code silently skips the restore if it doesn't find the files:

```python
def restore_last_session(self):
    saver = tf.train.Saver()
    # create a session
    sess = tf.Session()
    # get checkpoint state
    ckpt = tf.train.get_checkpoint_state(self.ckpt_path)
    # restore session
    if ckpt and ckpt.model_checkpoint_path:  # <-- silently skipped when no checkpoint is found
        saver.restore(sess, ckpt.model_checkpoint_path)
    # return to user
    return sess
```
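Because the `if` above falls through without complaint, a quick sanity check before calling `restore_last_session` helps. TensorFlow's Saver writes a plain-text `checkpoint` index file next to the weights; the sketch below (the `find_checkpoint` name and error message are my own, not part of the repo) parses it and fails loudly when nothing restorable is present:

```python
import os

def find_checkpoint(ckpt_dir):
    """Return the checkpoint prefix recorded in ckpt_dir's 'checkpoint'
    index file, or raise so a missing restore can't be skipped silently."""
    index = os.path.join(ckpt_dir, 'checkpoint')
    if not os.path.exists(index):
        raise FileNotFoundError(
            'no "checkpoint" index file in %r -- '
            'did you decompress the model into this folder?' % ckpt_dir)
    with open(index) as f:
        first = f.readline().strip()
    # the first line looks like: model_checkpoint_path: "some/prefix"
    return first.split(':', 1)[1].strip().strip('"')
```

If this raises, `tf.train.get_checkpoint_state` would have returned `None` and the restore would have been skipped, which is exactly the situation that produces the uninitialized-value error.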

To fix this issue, do either one of the following after pulling and decompressing the model:

1) Make sure all the ckpt files are directly in the ckpt folder, or
2) Modify the ckpt_path (line 29 in chatbot.py) to be

`ckpt = 'ckpt/seq2seq_twitter_1024x3h_i43000'`

*This assumes your uncompressed folder is named the same as mine. If not, change seq2seq_twitter_1024x3h_i43000 to whatever you've named it.
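Either way, a saved checkpoint is never a single file: the Saver splits it across a `checkpoint` index file plus weight files (`*.meta`, `*.index`, `*.data-*` in V2 checkpoints, or `*.ckpt*` in older ones). A quick glob over the directory you point `ckpt_path` at, sketched below with a helper name and patterns of my own choosing, confirms whether option 1 or 2 actually put the files where the code will look:

```python
import glob
import os

def checkpoint_files(ckpt_dir):
    """List candidate TensorFlow checkpoint files under ckpt_dir.

    tf.train.get_checkpoint_state(ckpt_dir) only succeeds when the
    directory holds a 'checkpoint' index file plus the saved weights,
    so an empty result here explains the uninitialized-value error.
    """
    patterns = ('checkpoint', '*.ckpt*', '*.index', '*.meta', '*.data-*')
    found = []
    for pattern in patterns:
        found.extend(glob.glob(os.path.join(ckpt_dir, pattern)))
    return sorted(set(found))
```

An empty list means the restore will be skipped and every variable (including the Adam slots from the error above) stays uninitialized.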

That solved my problem and will likely fix yours.