queirozfcom closed this issue 7 years ago
While it is true that the Word2vec model is not deep in and of itself, the word vectors derived from it act as the building blocks of various deep learning models (especially those dealing with natural language). Besides, the understanding of the variations of word2vec algorithms (GloVe, fastText etc.) and their advantages over each other is not only pedagogical, but also aids in the choice of pretrained word embeddings.
Further, since this is a roadmap to deep learning and not merely a collection of research papers in deep learning, word2vec's presence in the roadmap is justified.
@saurabhmathur96 Sure, I agree. =D
Maybe just making a note of this fact on the home page would be good, but it's your call.
I'll close this then.
Sorry, but I must say that Word2Vec shouldn't be here.
Word2Vec is a shallow (one-hidden-layer) linear neural network.
In fact, the very premise of Word2Vec is trading model complexity (many layers, nonlinear activations) for a much simpler model that can be trained faster on much larger datasets.
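To make the "shallow and linear" point concrete, here is a minimal NumPy sketch of the skip-gram forward pass: an embedding lookup (one linear projection), a second linear layer, and a softmax, with no nonlinear hidden activations anywhere. The vocabulary size, embedding dimension, and random weights are toy assumptions for illustration, not a real training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 10, 4  # toy sizes, chosen for illustration

# The entire model: two weight matrices, nothing else.
W_in = rng.normal(scale=0.1, size=(vocab_size, embed_dim))   # input ("projection") layer
W_out = rng.normal(scale=0.1, size=(embed_dim, vocab_size))  # output layer

def forward(center_id):
    """Predict context-word probabilities for one center word."""
    h = W_in[center_id]       # embedding lookup: a linear projection, no activation
    scores = h @ W_out        # second (and last) linear layer
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()    # softmax over the vocabulary

probs = forward(3)
print(probs.shape)  # (10,) -- a distribution over the whole vocabulary
```

After training, the rows of `W_in` are the word vectors; the rest of the network is discarded. That the whole model is two linear maps is exactly why it scales to huge corpora.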
Omer Levy is one of the main researchers in word embeddings. See the first answer here: Quora: How does Word2Vec work?