MartinoMensio / spacy-universal-sentence-encoder

Google USE (Universal Sentence Encoder) for spaCy
MIT License

Accessing Document vectors and computing similarity is too slow without batching #18

Open ATAboukhadra opened 3 years ago

ATAboukhadra commented 3 years ago

Hi,

I use this library together with other spaCy models to create Doc objects, applying them to a large corpus of text with the pipe() method. The main bottleneck is that accessing each document's vector is too slow. Is there a way to get only the vector from applying the model to the text, or to extract the vectors in batches as well? This came up while computing similarity: I could only call similarity() on documents one by one, not in batches. Is there a way to compute similarity in batches?

I'm using a 4-core CPU.

I hope my question is clear.

Thanks.
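One way to approach the batching question above, sketched here under the assumption that the document vectors have already been collected from `nlp.pipe()`: stack them into a single NumPy matrix and compute all pairwise cosine similarities in one vectorized operation instead of calling `similarity()` pair by pair. The loading lines in the comments (model name, pipe setup) are illustrative, not the plugin's confirmed API; random 512-dimensional vectors stand in for real USE embeddings.

```python
import numpy as np

# In practice the vectors would come from the pipeline, e.g. (illustrative):
#   docs = nlp.pipe(texts, batch_size=64)
#   vectors = np.stack([doc.vector for doc in docs])
# Here, random vectors with USE's 512 dimensions stand in for real embeddings.
rng = np.random.default_rng(0)
vectors = rng.standard_normal((4, 512))

def pairwise_cosine(mat):
    """All-pairs cosine similarity for an (n_docs, dim) matrix."""
    # Normalize each row to unit length, then a single matrix product
    # yields the full (n_docs, n_docs) similarity matrix.
    norms = np.linalg.norm(mat, axis=1, keepdims=True)
    unit = mat / norms
    return unit @ unit.T

sims = pairwise_cosine(vectors)
print(sims.shape)  # (4, 4): sims[i, j] is the similarity of doc i and doc j
```

This replaces O(n²) individual `similarity()` calls with one matrix multiplication, which is where the speedup for large corpora comes from.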

repodiac commented 3 years ago

Hi, imho the spaCy way of dealing with separate documents is sort of "in the way" here. I don't recall any way to handle batches of spaCy docs!?

I come from the other direction: I have a huge number of embeddings already computed with USE and would like to feed them into a spaCy pipeline. I think that would be easier for your case too: overwrite or add the USE embedding as a "vector" hook on the respective Doc object.

Of course, it would be great if @ATAboukhadra could assist here and maybe extend the plugin with a convenience method.
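The hook approach described above can be sketched with spaCy's `user_hooks` mechanism, which lets precomputed embeddings override what `doc.vector` and `doc.similarity()` return. This is a minimal illustration, not the plugin's implementation: the `precomputed` mapping and the stand-in random vectors are assumptions, and a blank pipeline is used since no model is needed for the hook itself.

```python
import numpy as np
import spacy

nlp = spacy.blank("en")  # the hook mechanism needs no trained model

texts = ["first document", "second document"]
# Stand-in for embeddings precomputed with USE outside spaCy.
rng = np.random.default_rng(0)
precomputed = {t: rng.standard_normal(512) for t in texts}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

docs = []
for doc in nlp.pipe(texts):
    vec = precomputed[doc.text]
    # user_hooks["vector"] is called with the Doc and its return value
    # becomes doc.vector; likewise for doc.similarity(other).
    doc.user_hooks["vector"] = lambda d, v=vec: v
    doc.user_hooks["similarity"] = lambda d, other: cosine(d.vector, other.vector)
    docs.append(doc)

# doc.vector and doc.similarity() now use the precomputed embeddings.
print(docs[0].similarity(docs[1]))
```

A convenience method in the plugin could wrap exactly this loop, taking a text-to-vector mapping and returning hooked Doc objects.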