
Overview and benchmark of traditional and deep learning models in text classification

Original post: https://ahmedbesbes.com/overview-and-benchmark-of-traditional-and-deep-learning-models-in-text-classification.html

This article is an extension of a previous one I wrote when I was experimenting with sentiment analysis on Twitter data. Back then, I explored a simple model: a two-layer feed-forward neural network trained with Keras. The input tweets were represented as document vectors obtained by taking a weighted average of the embeddings of the words composing each tweet.
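The document-vector idea above can be sketched in a few lines. This is a minimal illustration, not the post's actual code: the embeddings and per-word weights below are made up (in the post the embeddings come from a word2vec model and the weights would typically be tf-idf scores).

```python
import numpy as np

# Toy 2-d embedding lookup; in the post these come from a word2vec
# model trained on the tweet corpus (values here are made up)
embeddings = {
    "good": np.array([0.9, 0.1]),
    "movie": np.array([0.4, 0.4]),
    "bad": np.array([-0.8, 0.2]),
}

# Hypothetical per-word weights (e.g. tf-idf scores)
weights = {"good": 2.0, "movie": 1.0, "bad": 2.0}

def tweet_vector(tokens, dim=2):
    """Represent a tweet as the weighted average of its word embeddings."""
    known = [t for t in tokens if t in embeddings]
    if not known:
        return np.zeros(dim)
    vecs = np.array([embeddings[t] for t in known])
    w = np.array([weights[t] for t in known])
    return (vecs * w[:, None]).sum(axis=0) / w.sum()

doc = tweet_vector(["good", "movie"])
```

The resulting fixed-size vector is what gets fed to the feed-forward classifier, regardless of the tweet's length.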

The embeddings came from a word2vec model I trained from scratch on the corpus using gensim. The task was binary classification, and with this setup I was able to achieve 79% accuracy.

The goal of this post is to explore other NLP models trained on the same dataset and benchmark their performance on a common test set.

We'll go through a range of models: from simple ones relying on a bag-of-words representation to heavier machinery involving convolutional and recurrent neural networks. We'll see whether we can beat 79% accuracy!
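To give a flavor of the simpler end of that spectrum, here is a minimal bag-of-words baseline: tf-idf-weighted word n-grams fed into a linear classifier. The mini dataset is made up for illustration; the pipeline shape is the standard scikit-learn pattern, not the post's exact code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up mini dataset standing in for the labelled tweets (1 = positive)
texts = [
    "great movie loved it",
    "awful boring film",
    "loved the plot",
    "terrible waste of time",
    "what a great film",
    "boring and awful",
]
labels = [1, 0, 1, 0, 1, 0]

# Word 1- and 2-grams with tf-idf weighting, then logistic regression
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
clf.fit(texts, labels)

pred = clf.predict(["loved this great movie"])
```

Despite their simplicity, baselines like this are often surprisingly hard to beat, which is exactly what the benchmark below puts to the test.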


Here are the models that have been tested:

By the end of this post, you will have boilerplate code for each of these NLP techniques. It'll help you kickstart your NLP project and eventually achieve state-of-the-art results (some of these models are really powerful).

Here's a sneak peek of the final result:

[Figure: benchmark of model accuracies]