weigary opened this issue 1 month ago
Hi HF folks,
We noticed in our deployment verification pipeline that the deployment success rate of TEI models with the HF `text-embeddings-inference` container is very low for the recent top-trending TEI models: only 23 out of 500 deployments were successful (Example deployment).
We looked into the root causes and found the following two common failures:
1) Error:

```
The --pooling arg is not set and we could not find a pooling configuration (1_Pooling/config.json) for this model.
```

My understanding is that the container expects a `1_Pooling/config.json` file in the repo, but the model owner did not provide one. In this case, what do you suggest we do?

We tried adding a default value of `pooling=mean` as an environment variable to the container and saw a significant improvement in the deployment success rate: 23/500 -> 243/500.

We know the pooling parameter is required for TEI models. However, we are not sure that adding a default value for all TEI models is the correct approach, and we are concerned it could negatively affect model quality. Can you advise how we should handle this?
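To illustrate, here is a minimal sketch of the kind of pre-deployment check that would apply the fallback only when a repo lacks its own pooling config (assuming a recent `huggingface_hub`; the `mean` fallback and the helper name are our assumptions, not an official TEI default):

```python
# Minimal sketch: decide whether to pass an explicit --pooling value to TEI.
# Assumes a recent huggingface_hub; the "mean" fallback is an assumption,
# not an officially recommended default.
from huggingface_hub import file_exists


def pooling_override(repo_id: str, fallback: str = "mean") -> str | None:
    """Return a --pooling value for TEI, or None if the repo's own
    1_Pooling/config.json can be used as-is."""
    if file_exists(repo_id, "1_Pooling/config.json"):
        return None  # TEI can read the repo's pooling config itself.
    return fallback  # no pooling config in the repo: fall back instead of failing


# Example: this repo ships a pooling config, so no override is needed.
print(pooling_override("sentence-transformers/all-MiniLM-L6-v2"))  # -> None
```

This way, repos that define their own pooling are left untouched, and the default is only applied where the deployment would otherwise fail.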
2) Error:

```
Caused by:
  0: request error: HTTP status client error (404 Not Found) for url (https://huggingface.co/facebook/MEXMA/resolve/main/tokenizer.json)
  1: HTTP status client error (404 Not Found) for url (https://huggingface.co/facebook/MEXMA/resolve/main/tokenizer.json)
```

My understanding is that the container expects a `tokenizer.json` file in the repo, but the model owner did not provide one. Looking at the model card, the model is an XLM-RoBERTa model; I guess that is why the repo does not have a `tokenizer.json` file. To use the model, the user also needs to load the tokenizer first.

Many models fail with the same error. Can you advise if there is anything we can do here? We are also concerned that providing a default tokenizer would negatively affect model quality.
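One possible direction, as a sketch rather than a confirmed fix: for checkpoints that ship only slow (sentencepiece) tokenizer files, `transformers` can usually convert the slow tokenizer to a fast one and write out a `tokenizer.json` that the model owner could upload. Whether `facebook/MEXMA` actually ships the slow tokenizer files, and whether the conversion is exact, are assumptions that would need verifying:

```python
# Sketch: produce a tokenizer.json for a repo that lacks one. Assumes the repo
# ships slow tokenizer files (e.g. sentencepiece.bpe.model) and that
# transformers, sentencepiece, and tokenizers are installed.
from transformers import AutoTokenizer

repo_id = "facebook/MEXMA"  # the failing repo from the 404 above
tok = AutoTokenizer.from_pretrained(repo_id, use_fast=True)  # converts slow -> fast
assert tok.is_fast, "could not build a fast tokenizer for this repo"
tok.save_pretrained("mexma-tokenizer")  # writes tokenizer.json, among other files
```

The generated file should be spot-checked against the slow tokenizer on sample inputs before being uploaded or served.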
Thanks!
---

Olivier replied:

Hello! This PR will help with your first point.

Regarding the second one, unfortunately TEI relies on this file (`tokenizer.json`) for tokenization. We will update the Hub tag to make sure that these models are removed.

weigary replied:

Thank you, Olivier!