AnjaliDharmik / Text-Similarity-Using-Siamese-Deep-Neural-Network

A Keras-based implementation of a deep Siamese bidirectional LSTM network to capture phrase/sentence similarity using word embeddings.

Tokenizer error #2

Open wakamd opened 5 years ago

wakamd commented 5 years ago

Hi, I have been getting this error:

First: in model.py, tokenizer is not defined. So I added this in test_dataset_model(df_test, model):


tokenizer, embedding_matrix = pre_processing.word_embed_meta_data(question1_test + question2_test,  EMBEDDING_DIM)
embedding_meta_data = {'tokenizer': tokenizer,'embedding_matrix': embedding_matrix}

After running Wrapper.py, I got these errors:

Traceback (most recent call last):
  File "/home/mauricewaka/Documents/Text-Similarity-Using-Siamese-Deep-Neural-Network-master/Wrapper.py", line 22, in
    test_results = model.test_dataset_model(df_test,train_model)
  File "/home/mauricewaka/Documents/Text-Similarity-Using-Siamese-Deep-Neural-Network-master/model.py", line 66, in test_dataset_model
    tokenizer, embedding_matrix = pre_processing.word_embed_meta_data(question1 + question2, EMBEDDING_DIM)
NameError: global name 'question1' is not defined

Traceback (most recent call last):
  File "/home/mauricewaka/Documents/Text-Similarity-Using-Siamese-Deep-Neural-Network-master/Wrapper.py", line 22, in
    test_results = model.test_dataset_model(df_test,train_model)
  File "/home/mauricewaka/Documents/Text-Similarity-Using-Siamese-Deep-Neural-Network-master/model.py", line 69, in test_dataset_model
    test_data_x1, test_data_x2, leaks_test = pre_processing.create_test_data(tokenizer,questions_test_pair, MAX_SEQUENCE_LENGTH)
  File "/home/mauricewaka/Documents/Text-Similarity-Using-Siamese-Deep-Neural-Network-master/pre_processing.py", line 80, in create_test_data
    test_questions_1 = tokenizer.texts_to_questions(test_questions1)
AttributeError: 'Tokenizer' object has no attribute 'texts_to_questions'
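For what it's worth, the second error is fixable independently of the first: the Keras Tokenizer class has a texts_to_sequences method, not texts_to_questions, so pre_processing.create_test_data calls an API that does not exist. Below is a minimal sketch of a corrected create_test_data; the ToyTokenizer is a hypothetical stand-in (so the snippet runs without Keras installed) that mimics only fit_on_texts/texts_to_sequences, and the post-padding helper approximates what keras pad_sequences does (note that the real pad_sequences pads at the front by default):

```python
class ToyTokenizer:
    """Hypothetical stand-in for keras.preprocessing.text.Tokenizer,
    implementing just the two methods used here."""

    def __init__(self):
        self.word_index = {}  # word -> 1-based integer index

    def fit_on_texts(self, texts):
        # Assign indices in order of first appearance (a simplification of
        # Keras's frequency-ordered indexing).
        for text in texts:
            for word in text.lower().split():
                if word not in self.word_index:
                    self.word_index[word] = len(self.word_index) + 1

    def texts_to_sequences(self, texts):
        # This is the real Keras method name; the repo's code mistakenly
        # calls tokenizer.texts_to_questions, which does not exist.
        return [[self.word_index[w] for w in t.lower().split()
                 if w in self.word_index] for t in texts]


def create_test_data(tokenizer, test_question_pairs, max_sequence_length):
    """Sketch of a corrected pre_processing.create_test_data: split the
    question pairs, convert each side with texts_to_sequences, and
    post-pad/truncate to max_sequence_length."""
    test_questions1 = [q1 for q1, _ in test_question_pairs]
    test_questions2 = [q2 for _, q2 in test_question_pairs]

    seqs1 = tokenizer.texts_to_sequences(test_questions1)  # was texts_to_questions
    seqs2 = tokenizer.texts_to_sequences(test_questions2)

    def pad(seq):
        return (seq + [0] * max_sequence_length)[:max_sequence_length]

    return [pad(s) for s in seqs1], [pad(s) for s in seqs2]


if __name__ == "__main__":
    questions1 = ["how old are you", "what is your name"]
    questions2 = ["what is your age", "what should i call you"]
    tok = ToyTokenizer()
    # Fit once on all test text, mirroring the word_embed_meta_data call
    # added above.
    tok.fit_on_texts(questions1 + questions2)
    x1, x2 = create_test_data(tok, list(zip(questions1, questions2)), 6)
    print(x1[0], x2[0])
```

The first error would still need the undefined question1/question2 inside test_dataset_model replaced with the test-set lists (e.g. columns pulled from df_test), since those names only exist in the training path.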

wakamd commented 5 years ago

I'm using Ubuntu 18.04 LTS with Python 2.7.