damian0815 / compel

A prompting enhancement library for transformers-type text embedding systems
MIT License

Parallelism in generating embeddings #102

Open Daniel-SicSo-Edinburgh opened 1 month ago

Daniel-SicSo-Edinburgh commented 1 month ago

Hi,

I was looking over the docs, and batch inference was mentioned. I looked at the code, and it is not actually batch inference; it is sequential inference, one prompt at a time.

I was really hoping for batch inference because I have a lot of samples I need to embed, and doing them in parallel would save a lot of time. Maybe this can be added as a feature in the future.

For now, I would advise adding a disclaimer to the docs warning users that it is sequential and thus slow for large numbers of prompts.
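To make the distinction concrete, here is a minimal sketch of sequential vs. batched embedding. The encoder below is a trivial stand-in, not compel's actual text encoder, and the function names are hypothetical; with a real transformer encoder, the batched version would stack the tokenized prompts into one tensor and run a single forward pass instead of N separate ones.

```python
# Stand-in "encoder": maps each token to a fake per-token embedding
# (its length). This is only to illustrate the call pattern, not
# anything compel actually computes.
def encode_one(prompt):
    return [len(tok) for tok in prompt.split()]

def encode_sequential(prompts):
    # One encoder call per prompt: the pattern described in this issue.
    return [encode_one(p) for p in prompts]

def encode_batched(prompts):
    # A single call over the whole batch. Here it is simulated with a
    # list comprehension; a real encoder would pad/stack the tokenized
    # prompts and process them in one forward pass, which is where the
    # speedup comes from.
    batch = [p.split() for p in prompts]
    return [[len(tok) for tok in toks] for toks in batch]

prompts = ["a red fox", "an old lighthouse at dusk"]
assert encode_sequential(prompts) == encode_batched(prompts)
```

The outputs are identical either way; only the number of encoder invocations differs, which is what matters for throughput on many samples.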

aristotaloss commented 1 month ago

Perhaps one of the library's API methods would help in your case? I think I've seen ways to get more low-level control over the parsing/processing before it's fed into the text encoder. Worth a shot, maybe?