-
Hello,
Thanks for this fantastic implementation.
I'm wondering if it's possible to use sentences as training units, since the context window is normally applied within a sentence, right? If we use documents the …
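For illustration, the choice of training unit matters because (target, context) pairs are generated within each unit. A minimal sketch in plain Python (not the repo's actual code) of skip-gram pair generation where the window never crosses a sentence boundary:

```python
def skipgram_pairs(sentences, window=2):
    """Generate (target, context) pairs with the window applied per
    sentence, so context never crosses a sentence boundary."""
    pairs = []
    for sent in sentences:
        for i, target in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    pairs.append((target, sent[j]))
    return pairs

sents = [["the", "cat", "sat"], ["dogs", "bark"]]
# "sat" and "dogs" are never paired: they sit in different sentences.
pairs = skipgram_pairs(sents, window=2)
```

Feeding whole documents as single units would instead allow pairs that span sentence boundaries, which is the distinction the question is getting at.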
-
Hi Eric,
Thanks for the excellent enhancement. I am trying to use your repo for incremental learning, but I am getting a memory error while running the script. My machine has 32 GB of RAM and I am able to l…
-
In my `execute()` call, I'm able to easily tell fastRtext to use bigrams, per the instructions/commands for fasttext:
```
execute(
  commands = c(
    "skipgram",
    "-input",
    tmp_file_txt,
    "…
-
I was wondering if we can load these models with gensim or something similar. Since the whole file is binary, we need to know its format and more information. Can you please provide more info on it?
-
Hi, I read your code and found something I don't understand. In the context-prediction module (pretrain_contextpred.py), the negative context representation is obtained by cyclically shifting …
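For context, cyclic shifting is a common cheap negative-sampling trick: each item in a batch is paired with the context of another item from the same batch. A plain-Python sketch (the list layout and names here are assumptions, not the repo's actual tensors):

```python
def cycle_shift(batch, k=1):
    """Shift a batch by k positions so each element at index i is
    paired with the element at (i + k) mod n, yielding mismatched
    (hence 'negative') pairs without sampling extra data."""
    n = len(batch)
    return [batch[(i + k) % n] for i in range(n)]

# Hypothetical per-example context representations in one batch.
contexts = ["ctx_a", "ctx_b", "ctx_c", "ctx_d"]
negatives = cycle_shift(contexts, k=1)
# Position i is now paired with context (i + 1) mod n, so every
# substructure sees a context drawn from a different example.
```

In tensor frameworks the same idea is usually a single roll of the batch dimension; the assumption is that two examples in one batch rarely share the same context, so the shifted pairs act as negatives.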
-
```
Hi, I am new to word2vec. I am preparing a corpus of sentences from a Wikipedia
dump. However, the dump is pre-split into paragraphs, which seem to need
further processing into sentences.
My qu…
```
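A minimal sketch of splitting such paragraphs into sentences before training. This uses a naive regex splitter for illustration only; real corpora usually need a proper sentence tokenizer (e.g. NLTK's `sent_tokenize`, which is an assumption on my part, not something the question specifies):

```python
import re

def split_paragraph(paragraph):
    """Naively split a paragraph into sentences on ., !, or ? followed
    by whitespace. Fine for a sketch; abbreviations like "e.g." will
    confuse it, which is why a real tokenizer is preferable."""
    parts = re.split(r"(?<=[.!?])\s+", paragraph)
    return [s.strip() for s in parts if s.strip()]

para = "Word2vec learns embeddings. It uses context windows! Does it work?"
sentences = split_paragraph(para)
```

Each resulting sentence can then be written on its own line, which is the per-line format most word2vec tooling expects as a training unit.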