michaelfeil / infinity

Infinity is a high-throughput, low-latency REST API for serving text-embedding, reranking, and CLIP models.
https://michaelfeil.github.io/infinity/
MIT License

Tensor-parallelism for multi-gpu support #213

Open SalomonKisters opened 5 months ago

SalomonKisters commented 5 months ago

Feature request

Being able to split models into multiple GPUs, as with vllm/aphrodite engine for LLMs.

Motivation

It would be extremely helpful to be able to split larger models across multiple GPUs, as vllm/aphrodite-engine do for LLMs. Also, without TP, one GPU carries the entire VRAM load while the others sit idle, which makes it impossible to run a tensor-parallel workload from another program on those GPUs at the same time without wasting that much VRAM.

Your contribution

communicating the feature

michaelfeil commented 4 months ago

You typically do data-parallel style inference with sentence-transformers: each GPU holds a full copy of the model and encodes its own shard of the batch. TP is only needed when a single GPU can't fit the model or can't handle the desired batch size at all. Unless there are compelling benchmarks showing otherwise for bert-base-sized models, there is no need for tensor parallelism.
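To illustrate the data-parallel approach described above, here is a minimal sketch of batch sharding across devices. The per-device encoder is a stub (in practice it would be something like `SentenceTransformer(...).encode(shard, device=device)` running in a worker per GPU, which is roughly what sentence-transformers' multi-process pool does); the function and device names are illustrative, not part of infinity's API.

```python
# Data-parallel embedding sketch: full model replica per device, one batch
# shard per device. The encoder is stubbed so the sharding logic itself
# runs anywhere; swap in a real model call per worker for actual inference.
from concurrent.futures import ThreadPoolExecutor


def shard(batch, n):
    """Split batch into n contiguous shards of near-equal size."""
    k, r = divmod(len(batch), n)
    shards, start = [], 0
    for i in range(n):
        end = start + k + (1 if i < r else 0)
        shards.append(batch[start:end])
        start = end
    return shards


def encode_on_device(device, sentences):
    # Stand-in for a real per-GPU model call, e.g.
    # SentenceTransformer("model-name").encode(sentences, device=device)
    return [(device, s) for s in sentences]


def data_parallel_encode(sentences, devices):
    shards = shard(sentences, len(devices))
    with ThreadPoolExecutor(max_workers=len(devices)) as ex:
        results = ex.map(encode_on_device, devices, shards)
    # ex.map preserves shard order, so outputs line up with the input batch
    return [emb for part in results for emb in part]
```

For production use, sentence-transformers ships this pattern built in via `start_multi_process_pool` / `encode_multi_process`, which spawn one worker process per listed device.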