huggingface / text-embeddings-inference

A blazing fast inference solution for text embeddings models
https://huggingface.co/docs/text-embeddings-inference/quick_tour
Apache License 2.0

Too many router/tokenizer threads #404

Closed askervin closed 17 hours ago

askervin commented 1 week ago

System Info

text-embeddings-router 1.5.0 from image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5

Reproduction

Review the code in router/src/lib.rs:

let tokenization_workers = tokenization_workers.unwrap_or_else(num_cpus::get_physical);

or inspect the number of threads of text-embeddings-router when it runs in a cgroup with a limited cpuset.cpus on a host with many CPUs, for instance the text-embeddings-inference:cpu-1.5 image in a container with a restricted cpuset.cpus.
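A minimal sketch of a cgroup-aware default, assuming the fix is simply to swap the thread-count source: `std::thread::available_parallelism()` in the Rust standard library honors the process's CPU affinity mask and Linux cgroup limits, whereas `num_cpus::get_physical()` reports all physical cores on the host. The function name `tokenization_worker_count` is hypothetical, used only to illustrate the change:

```rust
use std::thread;

// Hypothetical wrapper around the defaulting logic in router/src/lib.rs.
// available_parallelism() respects sched_setaffinity masks and cgroup
// CPU limits on Linux, so inside a container it returns the constrained
// count rather than the host's full core count.
fn tokenization_worker_count(requested: Option<usize>) -> usize {
    requested.unwrap_or_else(|| {
        thread::available_parallelism()
            .map(|n| n.get())
            .unwrap_or(1) // conservative fallback if the query fails
    })
}

fn main() {
    // An explicit setting always wins over the detected default.
    assert_eq!(tokenization_worker_count(Some(4)), 4);
    // Otherwise fall back to the affinity/cgroup-aware count.
    let n = tokenization_worker_count(None);
    assert!(n >= 1);
    println!("tokenization workers: {n}");
}
```

In a container pinned to 4 CPUs on a 256-CPU host, this default would yield 4 workers instead of 256.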

Expected behavior

The number of worker threads should not exceed the number of allowed CPUs for the process.

Containerized apps and services must not assume they can use all CPUs, memory, or other resources on the host; they should size themselves to what is available inside their containers.

Currently, the performance of the text-embeddings-inference service is very poor when it runs on a system with many CPUs (256, for instance) while the container's CPU allocation is limited (down to 4, for instance), because the router spawns far more worker threads than the container can actually schedule.

eero-t commented 1 week ago

Related: https://github.com/huggingface/text-embeddings-inference/issues/405, https://github.com/huggingface/text-embeddings-inference/issues/170