oswaldoludwig / Seq2seq-Chatbot-for-Keras

This repository contains a new generative chatbot model based on seq2seq modeling.
Apache License 2.0

limit = l[0][0] - IndexError: index 0 is out of bounds for axis 0 with size 0 #15

Open stajilov opened 6 years ago

stajilov commented 6 years ago

What could be the root cause of this?

for m in range(Epochs):

    # Loop over training batches due to memory constraints:
    for n in range(0, round_exem, step):

        q2 = q[n:n+step]
        print(q2)
        s = q2.shape
        print(s)
        count = 0
        for i, sent in enumerate(a[n:n+step]):
            print("Sentence")
            print(sent)
            l = np.where(sent==3)  #  the position of the symbol EOS
            limit = l[0][0]
            count += limit + 1
  File "train_bot.py", line 188, in <module>
    limit = l[0][0]
IndexError: index 0 is out of bounds for axis 0 with size 0

I don't see any 3 in the sentence, for some reason:

Sentence
[   1   31    5  640    8 2475    9    8  339    4    2    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0]
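The error itself just means that np.where(sent == 3) returned an empty index array, so l[0][0] has nothing to index: token 3 never occurs in the sentence. Below is a minimal guard against that case, reusing the loop variables from the snippet above (the EOS_INDEX constant is an addition for illustration, not part of the repo's code):

    import numpy as np

    EOS_INDEX = 3  # assumed: the EOS token's index in the original dictionary

    for i, sent in enumerate(a[n:n+step]):
        hits = np.where(sent == EOS_INDEX)[0]  # positions of the EOS symbol
        if hits.size == 0:
            # No EOS found: this sentence was encoded with a different
            # dictionary, so skip it instead of crashing on hits[0].
            print("no EOS in sentence %d, skipping" % i)
            continue
        limit = hits[0]
        count += limit + 1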
oswaldoludwig commented 6 years ago

Hi, if you changed the dictionary (e.g. if you generated another one), you have to look up the indexes of the special tokens "BOS" and "EOS" in your new dictionary and change them in the code, because EOS will not be 3 anymore. I should have used a variable for this. :-) Fortunately, there is a comment like "the position of the symbol EOS" next to every line that references these special tokens, which makes it easy to change.
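For illustration, a sketch of that lookup, assuming the dictionary was pickled to disk (the file name vocabulary_movie and the exact structure are assumptions; adjust them to however your dictionary is actually stored):

    import pickle

    with open('vocabulary_movie', 'rb') as f:  # assumed file name
        vocab = pickle.load(f)

    if isinstance(vocab, dict):
        # dict mapping token -> index
        print('BOS index:', vocab['BOS'])
        print('EOS index:', vocab['EOS'])
    else:
        # list of tokens sorted by frequency: the index is the list position
        tokens = list(vocab)
        print('BOS index:', tokens.index('BOS'))
        print('EOS index:', tokens.index('EOS'))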

stajilov commented 6 years ago

@oswaldoludwig thanks, so I just look into the dictionary and find the BOS and EOS tokens?

oswaldoludwig commented 6 years ago

Yes, the tokens of your new dictionary will be sorted by their frequency in the training dataset, i.e. the EOS (end-of-sentence) token had index 3 because it was the third most frequent token in my original training dataset. The easiest approach may be to move the special tokens to the same positions they have in the original dictionary; that should solve your problem as well.
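If you rebuild the dictionary yourself, one way to keep the hard-coded indexes valid is to pin the special tokens to fixed positions before appending the frequency-sorted vocabulary. A minimal sketch (the helper and the assumed order of special tokens are illustrative; match the positions to the original dictionary, where EOS sits at index 3):

    from collections import Counter

    # Assumed order that puts EOS at index 3; match the original dictionary.
    SPECIALS = ['PAD', 'BOS', 'UNK', 'EOS']

    def build_vocab(tokenized_sentences, max_size):
        # Count every token except the special tokens themselves.
        counts = Counter(tok for sent in tokenized_sentences for tok in sent
                         if tok not in SPECIALS)
        most_common = [tok for tok, _ in counts.most_common(max_size - len(SPECIALS))]
        tokens = SPECIALS + most_common
        return {tok: idx for idx, tok in enumerate(tokens)}

    vocab = build_vocab([['hi', 'there', 'EOS'], ['hi', 'EOS']], max_size=10)
    assert vocab['EOS'] == 3  # the index train_bot.py hard-codes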

stajilov commented 6 years ago

@oswaldoludwig thanks, but which corpus did you use to create the dictionary? I'm asking because it looks different from the simple_dialogue corpus.

oswaldoludwig commented 6 years ago

Yes, I created this dictionary when I was using the Cornell Movie Dialogs Corpus, before collecting data from English courses to compose my own dataset. So, I ended up keeping the dictionary I already had.