Open byronvoorbach opened 6 months ago
Thank you for your contribution to Weaviate. This issue has not received any activity in a while and has therefore been marked as stale. Stale issues will eventually be autoclosed. This does not mean that we are ruling out working on this issue, but it most likely has not been prioritized highly enough in recent months. If you believe that this issue should remain open, please leave a short reply. This lets us know that the issue is not abandoned and acts as a reminder for our team to consider prioritizing it again. Please also consider whether you can contribute to a solution for this issue. If you are willing to contribute but don't know where to start, please leave a quick message and we'll try to help you. Thank you, The Weaviate Team
This seems like a worthwhile addition, particularly because TensorFlow imposes a global lock on models; sending requests in batches would likely improve throughput and reduce lock contention (on text2vec-transformers).
Weaviate modules currently send vectorization requests to embedding providers one object at a time, even though most embedding provider APIs support batched requests. This per-object approach adds request overhead and decreases performance, especially on large-scale imports.
This would require us to investigate the batch-size limits of each provider's API and handle batching individually per module.
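As a rough illustration of the per-module handling described above, here is a minimal sketch of chunking texts to a provider-specific batch-size limit before sending one request per chunk. All names and limit values here are hypothetical, not Weaviate's actual module API.

```python
from typing import Callable, List, Sequence

# Hypothetical per-provider batch-size limits (illustrative values only;
# the real limits would come from each provider's API documentation).
BATCH_LIMITS = {
    "text2vec-openai": 2048,
    "text2vec-cohere": 96,
    "text2vec-transformers": 32,
}

def chunk(texts: Sequence[str], size: int) -> List[List[str]]:
    """Split texts into chunks of at most `size` items."""
    return [list(texts[i:i + size]) for i in range(0, len(texts), size)]

def vectorize_in_batches(
    texts: Sequence[str],
    module: str,
    send_batch: Callable[[List[str]], List[List[float]]],
) -> List[List[float]]:
    """Send one request per chunk instead of one request per object."""
    limit = BATCH_LIMITS.get(module, 1)  # fall back to per-object requests
    vectors: List[List[float]] = []
    for batch in chunk(texts, limit):
        vectors.extend(send_batch(batch))
    return vectors
```

For 100 objects with a limit of 32, this issues 4 requests (32, 32, 32, 4) instead of 100, which is where the throughput gain on large imports would come from.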