Closed julianlanger closed 5 years ago
Hi @julianlanger! Thank you for your feedback! I have uploaded a benchmark to the development branch. What currently takes long (about 2 s) is the induction of word frequencies.
This is a step you only have to perform once, and only when you are using SIF or uSIF embeddings. If you just compute plain average embeddings, you can drop `lang_freq="en"` and it will take only a few ms.
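To illustrate why word frequencies only matter for SIF/uSIF, here is a minimal sketch in plain Python (all function and variable names are my own illustration, not the library's API, and I omit SIF's common-component removal step): a plain average needs nothing but the word vectors, while a SIF-style average weights each word by a/(a + p(w)) and therefore needs the unigram probabilities p(w) that the one-off frequency-induction step provides.

```python
# Sketch of plain averaging vs. SIF-style weighting.
# All names are illustrative, not the library's actual API.

def average_embedding(sentence, vectors):
    """Plain average: no word frequencies needed."""
    vecs = [vectors[w] for w in sentence if w in vectors]
    dim = len(next(iter(vectors.values())))
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def sif_embedding(sentence, vectors, word_prob, a=1e-3):
    """SIF-style weighted average: needs p(w) for every word,
    hence the one-off word-frequency induction step."""
    weighted = [(a / (a + word_prob.get(w, 0.0)), vectors[w])
                for w in sentence if w in vectors]
    dim = len(next(iter(vectors.values())))
    total = sum(w for w, _ in weighted)
    return [sum(w * v[i] for w, v in weighted) / total
            for i in range(dim)]

# Toy 2-d vectors and unigram probabilities:
vectors = {"the": [1.0, 0.0], "cat": [0.0, 1.0]}
word_prob = {"the": 0.05, "cat": 0.0001}

avg = average_embedding(["the", "cat"], vectors)
sif = sif_embedding(["the", "cat"], vectors, word_prob)
# The frequent word "the" is down-weighted in the SIF version,
# so sif leans much more towards "cat" than avg does.
```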
Actually, I found a small bug in the FastText implementation while looking into your request. Thank you.
If you have further questions, feel free to ask.
Thank you for your quick reply! I am just wondering what this implies for my workflow. I have a very large document collection (4.5 million documents), each with a moderate number of sentences (200 to 250). Should I batch them into one large sentence collection for training and then do inference per document afterwards?
I have not yet had to work with this kind of data, although I have thought about it as a feature. I guess the fastest way would be to work with one large sentence collection and map each sentence index to a document id separately or, more simply, map each document id to a tuple of indices (lo, hi), where lo is the overall index of the first sentence in the document (assuming you stored the sentences in order).
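The (lo, hi) bookkeeping could be sketched like this in plain Python (names are my own illustration; I treat hi as exclusive, which the original description leaves open):

```python
# Flatten all documents into one sentence list and record, per document,
# the (lo, hi) index range of its sentences (hi exclusive here).
documents = {
    "doc_a": ["first sentence", "second sentence"],
    "doc_b": ["third sentence"],
}

all_sentences = []   # the one large collection to train on
doc_ranges = {}      # doc id -> (lo, hi)

for doc_id, sentences in documents.items():
    lo = len(all_sentences)
    all_sentences.extend(sentences)
    doc_ranges[doc_id] = (lo, len(all_sentences))

# After training on all_sentences, the sentence vectors of a single
# document are simply the slice [lo:hi] of the model's output.
lo, hi = doc_ranges["doc_b"]
print(all_sentences[lo:hi])  # → ['third sentence']
```

Since Python 3.7 dicts preserve insertion order, so the "sentences stored in order" assumption holds as long as you always iterate the documents the same way.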
Hi, I am currently trying out your algorithm and was wondering what speeds you achieve. On my machine (a MacBook Pro), training on 200 sentences takes roughly 3 seconds. Is this normal, or do you think something is wrong? Your help would be much appreciated!