flairNLP / flair

A very simple framework for state-of-the-art Natural Language Processing (NLP)
https://flairnlp.github.io/flair/

Slower vector extraction using DocumentEmbeddings #915

Closed saj1919 closed 4 years ago

saj1919 commented 5 years ago

Hi, I am trying to see what the performance of DocumentRNNEmbeddings is in terms of speed and accuracy. For now, let's just focus on speed.

I am running the following code:

from datetime import datetime

from flair.data import Sentence
from flair.embeddings import DocumentRNNEmbeddings

# w2v_embedding, glove_embedding, flair_embedding_forward/backward, bert_embedding and
# elmo_embedding are token embedding instances initialized elsewhere, all with default parameters

embedding_map = {
    "w2v_embedding": [w2v_embedding], "glove_embedding": [glove_embedding],
    "flair_embedding_backward": [flair_embedding_backward],
    "flair_embedding_forward": [flair_embedding_forward],
    "bert_embedding": [bert_embedding], "elmo_embedding": [elmo_embedding],
    "mixed_embeddings_1": [w2v_embedding, glove_embedding],
    "mixed_embeddings_2": [w2v_embedding, glove_embedding, bert_embedding],
    "mixed_embeddings_3": [w2v_embedding, glove_embedding, elmo_embedding],
    "mixed_embeddings_4": [w2v_embedding, glove_embedding, flair_embedding_backward, flair_embedding_forward],
}

# time embedding a single sentence with each embedding setup
for embedding_name, embedding in embedding_map.items():
    document_embeddings = DocumentRNNEmbeddings(embedding,
                                                hidden_size=64, rnn_layers=2, rnn_type='GRU', 
                                                bidirectional=True, dropout=0.1)
    start = datetime.now()
    sentence = Sentence("""fantasy sports fantasy cricket tips for middlesex v essex english t20 blast this article 
                        contains dream11 tips for middlesex v essex game overview middlesex will take on essex in 
                        world cup game on july 18 at lord tfg fantasy sports brings you tips and tricks for this 
                        fantasy round middlesex abd will be playing for middlesex this season and will also be 
                        featuring in this game along with abd d malan will be a key bat and will lead the team t 
                        helm finn and r jones will share the experience in the bowling mujeeb is also a part of 
                        this team he will lead spin duties and could also open the batting p stirling is sure to 
                        open the batting as he has done in the past for this team essex c delport is going to play 
                        this game and could also open the batting m amir has been signed to play for essex he will 
                        be opening the bowling bopara skipper harmer will also lend value in the middle order zampa 
                        could be picked too as a leggie""")
    document_embeddings.embed(sentence)
    end = datetime.now()
    print("%30s\t%s" % (embedding_name, end-start))

And the output is as follows:

                 w2v_embedding  0:00:00.085395
               glove_embedding  0:00:00.065417
      flair_embedding_backward  0:00:03.837845
       flair_embedding_forward  0:00:03.857098
                bert_embedding  0:00:00.341363
                elmo_embedding  0:00:03.597545
            mixed_embeddings_1  0:00:00.122546
            mixed_embeddings_2  0:00:00.393088
            mixed_embeddings_3  0:00:03.447110
            mixed_embeddings_4  0:00:07.567270

As can be seen from the output, the Flair and ELMo embeddings are slower. I just wanted to understand whether there is a particular reason for this, or whether there is any way to make it faster. Currently, all the embeddings are initialized with default parameters.

Speed is important in my case as I have to generate vectors for >50 million documents.

alanakbik commented 5 years ago

Hello @saj1919, if you have a GPU you can significantly increase speed by using mini-batching, i.e. always passing lists of sentences through the embeddings. We often use a mini-batch size of 32 for this, i.e. we call .embed() over a list containing 32 Sentence objects.
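For illustration, a minimal mini-batching sketch along these lines (assuming `document_embeddings` has already been initialized as above, and `texts` stands in for your raw document strings):

from flair.data import Sentence

sentences = [Sentence(text) for text in texts]  # texts: your raw document strings

mini_batch_size = 32
for i in range(0, len(sentences), mini_batch_size):
    batch = sentences[i:i + mini_batch_size]
    # one embed() call per mini-batch of 32 Sentence objects
    document_embeddings.embed(batch)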

Aside from this, Flair and ELMo will always be slower since these embeddings are produced on-the-fly by RNNs, whereas other embeddings are simple lookups from pre-computed embedding vector lists.

Another thing to note is that DocumentRNNEmbeddings must be trained on a task to make sense. If you just initialize like in the above code snippet, the RNN is randomly initialized so the document embeddings will not make sense.
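For context, a rough sketch of what training on a downstream task could look like, assuming a labelled flair Corpus object named `corpus` is available (the corpus and output path are illustrative, not from this thread):

from flair.models import TextClassifier
from flair.trainers import ModelTrainer

# wrap the document embeddings in a text classifier and train it end-to-end;
# this tunes the RNN so that document_embeddings produces meaningful vectors
label_dict = corpus.make_label_dictionary()
classifier = TextClassifier(document_embeddings, label_dictionary=label_dict)

trainer = ModelTrainer(classifier, corpus)
trainer.train('resources/doc-classifier', mini_batch_size=32, max_epochs=10)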

saj1919 commented 5 years ago

Thanks @alanakbik. This answer clarified almost everything for me. I just have to devise some sensible downstream task and then get the document vectors from that model afterwards.

Although Flair and ELMo provide good contextual vectors, it would be very difficult to go with them in production/online systems at this speed. Maybe good for offline jobs, but not online!

Closing the issue ... thanks again.

saj1919 commented 5 years ago

Trying to get vectors for a total of 600 sentences using BertEmbeddings to benchmark CPU vs. GPU performance. As you explained, I am passing multiple sentences to the embed() function.

On GPU (p2.xlarge, K80, 11 GB VRAM):

1 sentence in embed(): 148.076917 sec
2 sentences in embed(): 169.554963 sec
4 sentences in embed(): 223.345672 sec

Can't go beyond 6 sentences per embed() call due to CUDA memory issues.

On a compute-optimized c5.9xlarge instance with 36 CPU cores, 2 sentences in embed(): 61 sec

The code is the same for both runs, as given above. There is no real gain from using the GPU; in fact, the GPU run is a lot slower. Is there a proper explanation for this? Let me know.

saj1919 commented 5 years ago

Interestingly, if I go with only 1 sentence in embed() and run 4 or 6 scripts in parallel, I was able to complete the 600 sentences in ~60 seconds. Each BERT run takes up ~1.8 GB of VRAM.

Either I am using a weak GPU (here a K80) and performance is simply not up to the mark, or, by the looks of the results, the sentences passed to embed() are not being processed in parallel ... or I may be wrong.

The only positive side to this is that p2.xlarge is cheaper than c5.9xlarge!

In case someone wants to reproduce this, here is the code:

import pandas as pd
from datetime import datetime
from tqdm import tqdm
import numpy as np
from flair.data import Sentence
from flair.embeddings import BertEmbeddings, DocumentPoolEmbeddings
from pytorch_pretrained_bert import BertTokenizer
import warnings
warnings.filterwarnings('ignore')

limited_stopwords_set = set(['ourselves', 'hers', 'between', 'yourself', 'but', 'again', 'there', 'about', 'once', 'during', 'out', 'very', 'having', 'with', 'they', 'own', 'an', 'be', 'some', 'for', 'do', 'its', 'yours', 'such', 'into', 'of', 'itself', 'other', 'off', 'is', 's', 'am', 'or', 'who', 'as', 'from', 'him', 'each', 'the', 'themselves', 'until', 'below', 'are', 'we', 'these', 'your', 'his', 'through', 'me', 'were', 'her', 'more', 'himself', 'this', 'down', 'should', 'our', 'their', 'while', 'above', 'both', 'up', 'to', 'ours', 'had', 'she', 'all', 'no', 'when', 'at', 'any', 'before', 'them', 'same', 'and', 'been', 'have', 'in', 'will', 'on', 'does', 'yourselves', 'then', 'that', 'because', 'what', 'over', 'why', 'so', 'can', 'did', 'now', 'under', 'he', 'you', 'herself', 'has', 'just', 'where', 'too', 'only', 'myself', 'which', 'those', 'i', 'after', 'few', 'whom', 't', 'being', 'if', 'theirs', 'my', 'against', 'a', 'by', 'doing', 'it', 'how', 'further', 'was', 'here', 'than'])
bert_tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
bert_multi_embedding = BertEmbeddings('bert-base-multilingual-cased', layers="-2")
curr_embedding_name, curr_embedding = "bert_multi_embedding", bert_multi_embedding
document_embeddings = DocumentPoolEmbeddings([curr_embedding], fine_tune_mode="nonlinear", pooling="mean")

# embed a single sentence with the pooled document embedding and return it as a numpy array
def get_sentence_embedding(text=""):
    sentence = Sentence("""%s""" % text)
    document_embeddings.embed(sentence)
    sentence_embedding = sentence.get_embedding().detach().cpu().numpy()
    return sentence_embedding

# drop stopwords and punctuation, then truncate to the first 500 BERT word-pieces
def get_bert_token_limited_sentence(text=""):
    text_arr = [x.strip() for x in text.split() if x.strip() not in limited_stopwords_set]
    text = " ".join(text_arr)
    chars_to_remove = ["-", "?", "\'", "\"", ";", ":", "_", ",", ".", "!"]
    for c2r in chars_to_remove:
        text = text.replace(c2r, "")
    tokenized_text_arr = bert_tokenizer.tokenize(text.strip())
    tokenized_text_arr = tokenized_text_arr[:500]
    tokenized_text = " ".join(tokenized_text_arr)
    tokenized_text = tokenized_text.replace(" ##", "")
    return tokenized_text.strip()

text_data = """Neural machine translation (NMT) is an approach to machine translation that uses a large artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model. They require only a fraction of the memory needed by traditional statistical machine translation (SMT) models. Furthermore, unlike conventional translation systems, all parts of the neural translation model are trained jointly (end-to-end) to maximize the translation performance. Deep learning applications appeared first in speech recognition in the 1990s. The first scientific paper on using neural networks in machine translation appeared in 2014, followed by a lot of advances in the following few years. (Large-vocabulary NMT, application to Image captioning, Subword-NMT, Multilingual NMT, Multi-Source NMT, Character-dec NMT, Zero-Resource NMT, Google, Fully Character-NMT, Zero-Shot NMT in 2017) In 2015 there was the first appearance of a NMT system in a public machine translation competition (OpenMT'15). WMT'15 also for the first time had a NMT contender; the following year it already had 90% of NMT systems among its winners. The popularity of NMT also owes to the events such as the introducing of NMT section (NMT and Neural MT Training of Annual WMT (Workshop of Machine Translation), and the first independent workshop on NMT by Google which continued afterwards each year. NMT departs from phrase-based statistical approaches that use separately engineered subcomponents. Neural machine translation (NMT) is not a drastic step beyond what has been traditionally done in statistical machine translation (SMT). Its main departure is the use of vector representations ("embeddings", "continuous space representations") for words and internal states. The structure of the models is simpler than phrase-based models. There is no separate language model, translation model, and reordering model, but just a single sequence model that predicts one word at a time. However, this sequence prediction is conditioned on the entire source sentence and the entire already produced target sequence. NMT models use deep learning and representation learning. The word sequence modeling was at first typically done using a recurrent neural network (RNN). A bidirectional recurrent neural network, known as an encoder, is used by the neural network to encode a source sentence for a second RNN, known as a decoder, that is used to predict words in the target language. Convolutional Neural Networks (Convnets) are in principle somewhat better for long continuous sequences, but were initially not used due to several weaknesses that were successfully compensated for by 2017 by using so-called "attention"-based approaches. There are further Coverage Models addressing the issues in traditional attention mechanism, such as ignoring of past alignment information leading to over-translation and under-translation."""
len_arr = []
time_arr = []
# tokenize and embed the same document 99 times, recording per-iteration timings
for i in tqdm(range(1, 100)):
    start = datetime.now()
    tokenized_text = get_bert_token_limited_sentence(text_data)
    text_vector = get_sentence_embedding(tokenized_text)
    end = datetime.now()
    len_arr.append(i)
    time_arr.append((end-start).total_seconds())
print("%s %9.6f" % (curr_embedding_name, np.sum(time_arr)))

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.