Closed shengyehchen closed 7 years ago
The solution is to pass huge batches, which is not perfect but good enough when you have all your sentences upfront. For the real-time scenario, one could implement a server to serve those embeddings. We have not invested time in this yet, but it would be a great asset to have.
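A minimal sketch of the load-once pattern behind such a server: keep the model in memory after the first load and reuse it for every subsequent embedding call. `FakeModel`, `get_model`, and the `model.bin` path are hypothetical placeholders here, not the actual sent2vec/fastText API.

```python
# Hypothetical sketch: cache the expensive-to-load model at module level
# so repeated embedding calls do not reload its parameters.

class FakeModel:
    """Stand-in for the real pre-trained model (loading it is the slow step)."""

    def embed_sentences(self, sentences):
        # Dummy embeddings: one fixed-size vector per sentence.
        return [[float(len(s))] * 3 for s in sentences]


_MODEL = None  # cached model instance, populated on first use


def get_model(path="model.bin"):
    """Load the model on the first call only; later calls reuse the cache."""
    global _MODEL
    if _MODEL is None:
        _MODEL = FakeModel()  # in reality: load parameters from `path`
    return _MODEL


def get_sentence_embeddings(sentences):
    """Embed sentences without paying the model-loading cost each time."""
    return get_model().embed_sentences(sentences)
```

A real-time server would wrap `get_sentence_embeddings` behind an HTTP or socket endpoint, so the process (and the loaded model) stays alive between requests.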
Indeed, I am trying to use it in the real-time scenario, so it would be wonderful if someone implemented an interface that separates the model-loading part from the embedding-retrieval part!
This concerns a use case and a code path we fully share with fastText. It would therefore be best to ask the fastText community directly, or to use the Python API there. Please feel free to re-open this issue here once there is any update!
Has been solved in #17.
I followed the instructions in get_sentence_embeddings_from_pre-trained_models.ipynb to get embeddings of sentences, and I found that every call to get_sentence_embeddings() reloads the parameters of the model, which makes computing sentence embeddings repeatedly very time-consuming.
Is there any solution to make this more efficient?