Closed NeoZhangJianyu closed 1 month ago
hi, @NeoZhangJianyu, I can't reproduce your issue with the 3rd-party (Hugging Face) TEI Docker image. We should always pull the latest image:
docker run -p 8090:80 --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id BAAI/bge-base-en-v1.5
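(For reference, once the container is up, the service can be sanity-checked with a request like the one below; this assumes TEI's default /embed endpoint and the 8090 port mapping from the command above.)
curl 127.0.0.1:8090/embed -X POST -d '{"inputs":"What is Deep Learning?"}' -H 'Content-Type: application/json'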
I will check again! Thank you!
verified, thanks
[code]
I set up the ChatQnA example following the guide. When I verify the service "ghcr.io/huggingface/text-embeddings-inference:cpu-1.5", it fails and returns an error:
I found this error in the docker container log:
After checking, the model path it requests is wrong: https://huggingface.co/BAAI/bge-base-en-v1.5/resolve/main/model.onnx.
The current correct path is: https://huggingface.co/BAAI/bge-base-en-v1.5/resolve/main/onnx/model.onnx
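(A quick way to confirm which path exists is to check the HTTP status of each URL; the old path should return 404, while the new one should resolve, typically via a 302 redirect to the CDN.)
curl -sI https://huggingface.co/BAAI/bge-base-en-v1.5/resolve/main/model.onnx | head -n 1
curl -sI https://huggingface.co/BAAI/bge-base-en-v1.5/resolve/main/onnx/model.onnx | head -n 1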
My question is: how do we avoid such issues coming from 3rd-party components? Is it possible to fork them and have them maintained by OPEA separately?
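(One lighter-weight mitigation than a full fork, sketched below, is to pin the image by digest rather than a floating tag, so upstream pushes to :cpu-1.5 can't silently change what gets deployed; <digest-from-inspect> is a placeholder to be filled in from the inspect output.)
docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
docker inspect --format '{{index .RepoDigests 0}}' ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
docker run -p 8090:80 ghcr.io/huggingface/text-embeddings-inference@sha256:<digest-from-inspect> --model-id BAAI/bge-base-en-v1.5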