Hi, I have two questions:
First, I know I previously found documentation on how the CharNGram embedding works (either somewhere in the code or in the docs), but I cannot find it now.
Second, I would like to know whether FastText uses precomputed word vectors for the pretrained vocabulary, or whether calling it within build_vocab constructs embeddings from pretrained n-grams for the training-data vocabulary. In other words, does build_vocab handle out-of-pretrained-vocabulary words?
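For context, here is a minimal sketch of the mechanism I am asking about: FastText-style models can build a vector for an out-of-vocabulary word by averaging the vectors of its character n-grams. All names below (`char_ngrams`, `oov_vector`, the toy n-gram table) are illustrative, not torchtext's actual API.

```python
# Illustrative sketch, not torchtext code: FastText-style OOV handling
# composes an unseen word's vector from its character n-gram vectors.
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word with '<'/'>' boundary markers,
    in the FastText style (e.g. 'cat' -> '<ca', 'cat', 'at>', ...)."""
    w = f"<{word}>"
    return [w[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def oov_vector(word, ngram_vectors, dim=4):
    """Average the pretrained vectors of the word's known n-grams;
    fall back to a zero vector if no n-gram is in the table."""
    vecs = [ngram_vectors[g] for g in char_ngrams(word) if g in ngram_vectors]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)
```

If build_vocab does something like this (rather than only looking words up in a precomputed table), it would explain how out-of-pretrained-vocabulary words still get non-trivial vectors.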
Thank you for the information.