Thank you for this excellent overview of fastText! The visual aspect really helped. I hope you keep writing :)
P.S: I think the link for Armand Joulin et al., “Bag of Tricks for Efficient Text Classification” is https://arxiv.org/abs/1607.01759
@notes-ml Thank you for the catch. I'll fix the link.
Thank you for this detailed guide. Really helpful!
Great work and explanations! Thank you
Thank you for the tutorial. I noticed a difference between Facebook's implementation and gensim's: it turns out that gensim only supports unigrams at the word level, while Facebook's implementation also supports word n-grams.
Sir, thank you for such a good explanation of word embeddings. I have a confusion: why have you taken only the first 2 and last 2 characters in the 3-grams, while all the other entries match the n-gram length exactly? In the table for the word "eating", the first and last entries have length 2 ("ea", "ng"), but all the others have the right length of 3.
@sonia-simran The first n-gram is actually "<ea", i.e. 3 characters instead of just "ea". The "<" denotes the start of the word. It's the same case for the last n-gram, where ">" denotes the end of the word.
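The boundary-marker scheme discussed above can be sketched in a few lines. This is an illustrative snippet, not fastText's actual code (the real implementation also hashes each n-gram into a fixed number of buckets):

```python
def char_ngrams(word, n=3):
    # fastText wraps the word in "<" and ">" markers before slicing,
    # so the first and last n-grams include a boundary symbol.
    padded = "<" + word + ">"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

print(char_ngrams("eating"))
# → ['<ea', 'eat', 'ati', 'tin', 'ing', 'ng>']
```

With the markers included, every n-gram really is 3 characters long, which is what the table is showing.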
amazing! that was the first beginner-friendly explanation I saw on the net!
A Visual Guide to FastText Word Embeddings
A deep-dive into how FastText enriches word vectors with subword information
http://amitness.com/2020/06/fasttext-embeddings/