Hi, I have been getting these errors.

First, in model.py, `tokenizer` is not defined, so I added the tokenizer setup inside `test_dataset_model(df_test, model)`. After running Wrapper.py, I got these errors:
Traceback (most recent call last):
File "/home/mauricewaka/Documents/Text-Similarity-Using-Siamese-Deep-Neural-Network-master/Wrapper.py", line 22, in
test_results = model.test_dataset_model(df_test,train_model)
File "/home/mauricewaka/Documents/Text-Similarity-Using-Siamese-Deep-Neural-Network-master/model.py", line 66, in test_dataset_model
tokenizer, embedding_matrix = pre_processing.word_embed_meta_data(question1 + question2, EMBEDDING_DIM)
NameError: global name 'question1' is not defined
Traceback (most recent call last):
File "/home/mauricewaka/Documents/Text-Similarity-Using-Siamese-Deep-Neural-Network-master/Wrapper.py", line 22, in
test_results = model.test_dataset_model(df_test,train_model)
File "/home/mauricewaka/Documents/Text-Similarity-Using-Siamese-Deep-Neural-Network-master/model.py", line 69, in test_dataset_model
test_data_x1, test_data_x2, leaks_test = pre_processing.create_test_data(tokenizer,questions_test_pair, MAX_SEQUENCE_LENGTH)
File "/home/mauricewaka/Documents/Text-Similarity-Using-Siamese-Deep-Neural-Network-master/pre_processing.py", line 80, in create_test_data
test_questions_1 = tokenizer.texts_to_questions(test_questions1)
AttributeError: 'Tokenizer' object has no attribute 'texts_to_questions'
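For what it's worth, the two errors look like separate bugs. The AttributeError is a typo: the Keras `Tokenizer` has no `texts_to_questions` method; the standard call is `texts_to_sequences`. The NameError means `question1`/`question2` (the training-time lists) are not in scope inside `test_dataset_model`; the test questions should come from `df_test`'s own columns instead (e.g. `list(df_test['question1'])`, assuming the Quora question-pairs column names). Below is a minimal, dependency-free sketch of a corrected `create_test_data` — the pair format, the leak features, and the `pad` helper (standing in for Keras's `pad_sequences`) are assumptions about this repo, not its exact code:

```python
def pad(seq, maxlen):
    # Stand-in for keras.preprocessing.sequence.pad_sequences on one
    # sequence: left-pad with zeros, keep the last `maxlen` tokens.
    return [0] * max(0, maxlen - len(seq)) + seq[-maxlen:]

def create_test_data(tokenizer, test_question_pairs, max_sequence_length):
    # test_question_pairs is assumed to be a list of
    # (question1_text, question2_text) string tuples.
    q1 = [pair[0] for pair in test_question_pairs]
    q2 = [pair[1] for pair in test_question_pairs]

    # Fix for the AttributeError: the Keras Tokenizer method is
    # texts_to_sequences, not texts_to_questions.
    seq1 = tokenizer.texts_to_sequences(q1)
    seq2 = tokenizer.texts_to_sequences(q2)

    # Leak features (assumed to mirror the training-side logic):
    # unique-token counts and overlap between the two questions.
    leaks = [[len(set(a)), len(set(b)), len(set(a) & set(b))]
             for a, b in zip(seq1, seq2)]

    x1 = [pad(s, max_sequence_length) for s in seq1]
    x2 = [pad(s, max_sequence_length) for s in seq2]
    return x1, x2, leaks
```

Any object exposing a `texts_to_sequences(list_of_texts)` method works here, so the same function can be exercised without loading the real embeddings.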