lyeoni / nlp-tutorial

A list of NLP(Natural Language Processing) tutorials

Using the classifier #5

Open davidniki02 opened 5 years ago

davidniki02 commented 5 years ago

Hi. After saving the model in news-category-classification, how do you actually use it to classify new text? Can you put up an example, please?

lyeoni commented 5 years ago

@davidniki02 I wrote a simple code sample showing how to save/load the model. After loading the model, you can predict/classify just as you did before.

# save model architecture 
model_json = model.to_json()
with open("model.json", "w") as json_file : 
    json_file.write(model_json)

# save weights 
model.save_weights("model.h5")

# load model architecture
from keras.models import model_from_json
with open("model.json", "r") as json_file:
    loaded_model_json = json_file.read()
loaded_model = model_from_json(loaded_model_json)

# load weights
loaded_model.load_weights("model.h5")
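Alternatively, the architecture and weights can be kept in a single file; a minimal sketch using the standard Keras model.save/load_model:

# save architecture + weights together in one HDF5 file
model.save("model.h5")

# load it back without rebuilding the architecture
from keras.models import load_model
loaded_model = load_model("model.h5")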
davidniki02 commented 5 years ago

@lyeoni thanks, but I think we also need to store the tokenizer. I stored it using pickle.
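Roughly like this (a sketch; tokenizer here is the fitted Keras Tokenizer from data_loader.py):

import pickle

# save the fitted tokenizer so the same word-to-index mapping
# is available at prediction time
with open('tokenizer.pickle', 'wb') as handle:
    pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)

# load it back before predicting
with open('tokenizer.pickle', 'rb') as handle:
    tokenizer = pickle.load(handle)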

Here is my code but it predicts the same category all the time:


from keras.models import load_model
import keras.preprocessing.text as kpt
import numpy as np
import pandas as pd
import pickle

# loading
model = load_model('model_20190519141343.h5')

handle = open('tokenizer.pickle', 'rb')
tk = pickle.load(handle)

data = pd.read_json('News_Category_Dataset_v2.json', lines=True).drop(['authors', 'date', 'link'], axis=1)

while 1:
    text = input("Say something: ")

    if len(text) == 0:
        break

    #tk.fit_on_texts(text)
    #converts the texts to the index equivalents in our dictionary
    pred = tk.texts_to_sequences([text])
    print(pred)

    #onehot representation of all words in the evaluation text, and how they appear in our dictionary
    #input = tk.sequences_to_matrix(pred, mode='binary')

    arr = np.zeros(50).reshape(1, 50)
    print(arr)
    for i, word in enumerate(pred[0]):
        arr[0][i] = word
    print(arr)

    prediction = model.predict(arr)
    print(prediction)
    cls = np.argmax(prediction)

    print(cls)
    print(data['category'][cls])
lyeoni commented 5 years ago

@davidniki02

It depends on which tokenizer you use. For example, if you use MosesTokenizer (from nltk.tokenize.moses), you don't need to save/load it. Just instantiate it and use the returned tokenizer instance.

>>> m = MosesTokenizer()
>>> m.tokenize('2016, pp.')
    [u'2016', u',', u'pp', u'.']
davidniki02 commented 5 years ago

Thanks @lyeoni. You are using MosesTokenizer in tokenization_en.py, but data_loader.py uses Tokenizer from keras.preprocessing.text. I updated token_to_index in data_loader to store the tokenizer (self.tokenizer = tokenizer) and then saved it during training:

    # save model
    model.save('model_'+current+'.h5')
    print('MODEL SAVED')

    # saving
    with open('tokenizer.pickle', 'wb') as handle:
        pickle.dump(loader.tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)

The problem is that it predicts the same category all the time (with the same HuffPost dataset you used). Also, MosesTokenizer does not have a texts_to_sequences method, so I don't think it can be used directly for prediction?
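In other words, Moses only splits the text into tokens; the fitted Keras tokenizer is still needed to map tokens to index numbers. A rough sketch of how the two would combine (m is a MosesTokenizer, tk is the pickled Keras Tokenizer):

# Moses splits the raw string into word tokens
tokens = m.tokenize('Facebook accused of reading texts')

# the fitted Keras tokenizer maps those tokens to integer indices
seq = tk.texts_to_sequences([' '.join(tokens)])
print(seq)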

davidniki02 commented 5 years ago

Any luck, @lyeoni ?

lyeoni commented 5 years ago

Sorry for the delay in replying.

The reason why I used 2 tokenizers (MosesTokenizer, Keras Tokenizer) is that they do different jobs: MosesTokenizer splits the raw text into tokens (tokenization_en.py), while the Keras Tokenizer maps those tokens to index numbers (data_loader.py).

@davidniki02, I'd like to know whether your tokenizer is properly saved/loaded, because I'm not sure the method you used to save the tokenizer works well. Please check that a sample text is tokenized correctly.
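A quick check could look like this (a sketch; tokenizer.pickle is the file you saved during training):

import pickle

# load the pickled tokenizer and verify that a known sentence
# maps to sensible word indices
with open('tokenizer.pickle', 'rb') as handle:
    tk = pickle.load(handle)

print(tk.texts_to_sequences(['the president said on tuesday']))
print(tk.word_index.get('the'), tk.word_index.get('president'))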

davidniki02 commented 5 years ago

Thanks for replying, @lyeoni. Using MosesTokenizer does not return the numeric representation we need to pass to model.predict.

Here is the latest (not working) code:

# loading
model = load_model('model_20190519141343.h5')

handle = open('tokenizer.pickle', 'rb')
#tk = pickle.load(handle)
tk = MosesTokenizer()

data = pd.read_json('News_Category_Dataset_v2.json', lines=True).drop(['authors', 'date', 'link'], axis=1)

while 1:
    text = input("Say something: ")

    if len(text) == 0:
        break

    #tk.fit_on_texts(text)
    #converts the texts to the index equivalents in our dictionary
    tokens = tk.tokenize(text.strip(), escape=False)
    print(tokens)

    arr = np.zeros(50).reshape(1, 50)
    print(arr)
    for i, word in enumerate(tokens):
        arr[0][i] = word
    print(arr)

    prediction = model.predict(arr)
    print(prediction)
    cls = np.argmax(prediction)

    print(cls)
    print(data['category'][cls])

I probably need to use the Keras tokenizer to convert it to numbers (and get rid of the numpy array), e.g.

tokenizer = Tokenizer(num_words = 50000+1, oov_token='UNK')
tokenizer.texts_to_sequences(tokens)

but I don't know if I need to reload the corpus and concatenate the new text to it, whether I need to call fit_on_texts, etc.

# token_to_index
tokens = tokenized_corpus.apply(lambda i: i.split())
tokenizer.fit_on_texts(tokens)
tokenizer.word_index = {word:index for word, index in tokenizer.word_index.items() if index <= 50000}

It's getting a bit confusing. Can you show how the code should actually look?

lyeoni commented 5 years ago

@davidniki02 ,

In your code, the tokenizer is initialized/fit every time. But fit_on_texts should be called once on the entire corpus you have. (reference: http://faroit.com/keras-docs/1.2.2/preprocessing/text/)

Also, you don't have to use MosesTokenizer, because the Keras Tokenizer works well enough. Without MosesTokenizer, just use the following code (tokenized_corpus can be the original corpus, not a pre-processed one):

def token_to_index(self, tokenized_corpus, maximum_word_num):
    tokenizer = Tokenizer(num_words = maximum_word_num+1, oov_token='UNK')

    # tokenizer fitting (token to index number)
    tokens = tokenized_corpus.apply(lambda i: i.split())
    tokenizer.fit_on_texts(tokens)

    # build vocabulary
    tokenizer.word_index = {word:index for word, index in tokenizer.word_index.items() if index <= maximum_word_num}
    vocabulary = tokenizer.word_index

    # texts_to_sequences changes words into indexes
    return vocabulary, tokenizer.texts_to_sequences(tokens)
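
A hypothetical call, assuming the corpus is loaded as a pandas Series of headlines and the method lives on the DataLoader instance (loader):

import pandas as pd

# illustrative corpus; in practice it comes from the HuffPost dataset
corpus = pd.Series(['Facebook accused of reading texts',
                    'Stocks fall as trade tensions rise'])
vocabulary, sequences = loader.token_to_index(corpus, 50000)
print(sequences)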
davidniki02 commented 5 years ago

Thanks @lyeoni, that is exactly the part I don't get: how do I use the tokenizer on a new text (e.g. one passed from the command prompt)? The code from data_loader.py loads the corpus and fits the tokenizer, and I altered the code a bit to store the tokenizer for later access: self.tokenizer = tokenizer

But using the tokenizer to transform a new text (hence the text = input("Say something: ") code) into something the model would understand is the troublesome part. Can you show a sample for that, please?

davidniki02 commented 5 years ago

@lyeoni, I think I got it right this time:

from keras.models import load_model
from nltk.tokenize.moses import MosesTokenizer
import keras.preprocessing.text as kpt
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing.text import Tokenizer, text_to_word_sequence
import numpy as np
import pandas as pd
import pickle
import data_loader

# load model architecture
from keras.models import model_from_json

# loading
model = load_model('model_20190519141343.h5')

handle = open('tokenizer.pickle', 'rb')
tk = pickle.load(handle)

loader = data_loader.DataLoader("corpus.tk.txt", "corpus.tk.vec.txt", "News_Category_Dataset_v2.json")
loader.load_cat()
print(loader.category_dict)
while 1:
    text = input("Say something: ")

    if len(text) == 0:
        break

    seq = tk.texts_to_sequences([text.strip()])
    x = pad_sequences(seq, 50)
    print(x)
    prediction = model.predict(x)
    print(prediction)
    cls = np.argmax(prediction)

    print(cls)
    print(loader.category_dict[cls])

That being said, the predictions sometimes get really far off. I trained on the headlines, which yields higher accuracy than the summaries (about 80%), but when tested against something like "Facebook Accused Of Reading Texts And Accessing Microphones In Lawsuit" (which is even in the dataset), the answer is "POLITICS".
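For debugging, it may help to look at the top few predicted categories instead of only the argmax (a sketch reusing prediction and loader.category_dict from the code above):

# show the three highest-scoring categories with their probabilities
top3 = np.argsort(prediction[0])[::-1][:3]
for idx in top3:
    print(loader.category_dict[idx], prediction[0][idx])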

What results do you get? How accurate is the model?