alpayariyak opened this issue 9 months ago
Hey @alpayariyak, great question. TEI is a great project that started slightly later than this one, and I like it (apart from its license).
Benchmarking is pretty subjective. A single-sentence, 10-token query is not a representative workload; typically you would deploy BERT-large on Nvidia L4 instances and send batches of ~256 requests with ~380 tokens each. At that point, batch throughput/latency is likely the only metric you want to care about, since you need to serve under high load to get anything back for your money.
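For reference, a minimal sketch of how such a load could be measured, assuming infinity's OpenAI-compatible `/embeddings` route on its default port 7997 and an example model id (adjust both for TEI or your deployment):

```python
# Rough batch-throughput sketch: repeatedly POST batches of 256 texts of
# roughly 380 tokens each and report embeddings/s and per-batch latency.
import time
import requests

URL = "http://localhost:7997/embeddings"  # assumption: infinity's default port
MODEL = "BAAI/bge-large-en-v1.5"          # example model id
BATCH_SIZE, N_BATCHES = 256, 20
doc = "the quick brown fox jumps over the lazy dog " * 42  # roughly 380 tokens
batch = [doc] * BATCH_SIZE

start = time.perf_counter()
for _ in range(N_BATCHES):
    r = requests.post(URL, json={"model": MODEL, "input": batch})
    r.raise_for_status()
elapsed = time.perf_counter() - start

print(f"{N_BATCHES * BATCH_SIZE / elapsed:.1f} embeddings/s, "
      f"{elapsed / N_BATCHES * 1000:.0f} ms/batch")
```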
On CPU, infinity is around 3x faster when using the optimum engine. Candle/torch are not that great at CPU inference; ONNX has an edge here.
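A sketch of selecting the optimum (ONNX) backend via infinity's Python API; the argument names below are from my reading of the project and may differ between versions:

```python
# Run infinity with the optimum (ONNX) engine for CPU inference.
import asyncio
from infinity_emb import AsyncEmbeddingEngine, EngineArgs

engine = AsyncEmbeddingEngine.from_args(
    EngineArgs(
        model_name_or_path="BAAI/bge-small-en-v1.5",  # example model
        engine="optimum",  # ONNX backend; "torch" selects PyTorch instead
        device="cpu",
    )
)

async def main():
    async with engine:  # starts the batching loop
        embeddings, usage = await engine.embed(
            sentences=["Embed this sentence via Infinity."]
        )
        print(len(embeddings[0]), "dims;", usage, "tokens")

asyncio.run(main())
```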
On GPU, TEI is around 2-5% faster: roughly 0.55 requests per second on TEI vs 0.52 on infinity. You will need to choose the right image for this, and know that, e.g., compute capability 8.9 is what you should go for on an Nvidia L4.
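If you are unsure what your card reports, a quick check with PyTorch:

```python
# Print the GPU's compute capability; an Nvidia L4 (Ada) reports (8, 9).
import torch

major, minor = torch.cuda.get_device_capability()
print(f"compute capability {major}.{minor}")  # e.g. 8.9 on an L4
```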
Startup time is slightly faster / the same order of magnitude; this is for the GPU image. For roberta-large, it's a similar gap. TEI's Docker image is smaller: torch+cuda is a real heavyweight.
Additional features that TEI lacks:
@alpayariyak I invested about 4-5 hours on this and set up an extra doc. Can I please have your feedback on it? https://michaelfeil.eu/infinity/latest/benchmarking/
The benchmark link seems dead, could you please repost?
Fixed!
Your project is amazing! :rocket:
I :heart: your LICENSE, which is better than the one of TEI (:-1:)
Have you ever thought of adding an API endpoint that can serve as a TextSplitter as well? It would remove the need to load the same model in memory twice, once for the text chunker and once for the embedder.
@Jimmy-Newtron Can you open another issue for that?
Are there integrations into LangChain? What would be the expected usage? To count tokens?
The main goal would be to avoid loading the same model in memory twice.
> Are there integrations into LangChain?
Yes, I suppose a LangChain integration would be required.
> What would be the expected usage? To count tokens?
To optimize the resources used (GPU, VRAM), it would be nice for the Infinity server to be able to chunk long input sequences into smaller pieces that fit the context window of the chosen embedding model.
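As an illustration of the idea (this is not an existing infinity endpoint, just a sketch that reuses the embedding model's own tokenizer so no second model has to be loaded):

```python
# Chunk a long text into token windows that fit the embedding model's
# context, reusing its tokenizer instead of loading a separate model.
from transformers import AutoTokenizer

def chunk_text(text: str, model_id: str = "BAAI/bge-small-en-v1.5",
               max_tokens: int = 512, overlap: int = 32) -> list[str]:
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    step = max_tokens - overlap
    return [tokenizer.decode(ids[i:i + max_tokens])
            for i in range(0, len(ids), step)]

print(len(chunk_text("some long document text ... " * 500)), "chunks")
```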
I have found an implementation of a similar concept in AI21 Studio Text Segmentation, which is already available in the LangChain integrations.
Here are some sources that may be of interest for conceiving a solution:
> great question, TEI is a great project that started slightly later than this, and I like it (apart from its license).
https://github.com/huggingface/text-embeddings-inference/issues/232
https://github.com/huggingface/text-embeddings-inference/commit/3c385a4fdced6c526a3ef3ec340e343a2fa40196
Does this mean that there will be a convergence of the two projects?
Hi,
Thank you for your amazing work!
We'd like to add an embedding template for users to deploy on RunPod, and we're deciding between Infinity and HF's Text Embeddings Inference. How would you say Infinity compares, especially in performance?