You are trying to conduct batch inference using the OpenAI client, which connects to the online server. For offline batch inference via the OpenAI API, you should instead use vllm/entrypoints/openai/run_batch.py.
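For reference, the offline batch runner is invoked directly from the command line rather than through the server. A minimal sketch (file names and model name are placeholders; flags may differ slightly between vLLM versions):

```bash
# Run the OpenAI-compatible batch runner directly -- no API server involved.
# -i: input file in the OpenAI batch JSONL format, -o: where results are written.
python -m vllm.entrypoints.openai.run_batch \
    -i test.jsonl \
    -o results.jsonl \
    --model meta-llama/Meta-Llama-3-8B-Instruct
```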
Okay, but how can I integrate that with my Docker Compose setup, given that the Docker image always runs api_server.py? Do I have to create a new Docker image for that, or does vLLM provide one?
A simpler way would be to run the vLLM docker container as is, then open a new interactive shell inside it and run any commands you want.
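For example, something along these lines (the container name and paths are placeholders for whatever your compose project uses):

```bash
# Step 1: open an interactive shell in the already-running vLLM container.
docker exec -it vllm bash

# Step 2 (inside the container): run offline batch inference manually.
python -m vllm.entrypoints.openai.run_batch \
    -i /data/test.jsonl -o /data/results.jsonl \
    --model meta-llama/Meta-Llama-3-8B-Instruct
```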
Well, I have tried to run it inside the Docker container and it does not work properly. The run_batch.py endpoint starts a new model instance and does not use the existing api_server, which is not useful. Also, that is a long way from an integration with the OpenAI Python SDK, because I do not want to execute something inside the container all the time.
Yes, this is why it is called offline inference. Feel free to open an issue to request online support.
Sure, I will open an issue for that. Thanks for your support!
If you intend to only run batch inference inside Docker, what you can do is modify the image to run something like sleep infinity instead of vllm serve, so that no server is started while keeping the container alive, then run offline batch inference manually.
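As a sketch, the override could look like this in compose.yml (service name and image tag are assumptions; the stock vllm/vllm-openai image starts the API server from its entrypoint, so that is what gets replaced; GPU reservation omitted for brevity):

```yaml
services:
  vllm:
    image: vllm/vllm-openai:latest    # assumed image tag
    # Keep the container alive without launching the OpenAI API server.
    entrypoint: ["sleep", "infinity"]
    volumes:
      - ./batches:/data               # batch input/output files
```

You can then docker exec into the container and invoke run_batch.py manually as shown above.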
How would you like to use vllm
I want to use the OpenAI library to do offline inference on my local vLLM model. I use a compose.yml along the lines of the sketch below to create an api-server using vLLM.
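A simplified illustrative sketch of such a setup (image tag, model name, and port are placeholders, not the original file):

```yaml
services:
  vllm:
    image: vllm/vllm-openai:latest                                   # assumed image tag
    command: ["--model", "meta-llama/Meta-Llama-3-8B-Instruct"]      # placeholder model
    ports:
      - "8000:8000"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```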
When I try to use the batch API endpoint, I get NotFoundError: Error code: 404 - {'detail': 'Not Found'} for both create calls (sketched below). The test.jsonl file has the same format as in this tutorial: https://platform.openai.com/docs/guides/batch/getting-started?lang=curl, except that the name of the model is aligned with the correct model name. I would assume that there is a problem with the endpoints when using vLLM as the backend. Is it possible to use them or initialize them?
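The calls in question follow the standard OpenAI batch workflow; roughly like this (base URL, API key, and file name are placeholders):

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server started by compose.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# First create call: upload the batch input file (POST /v1/files).
batch_file = client.files.create(file=open("test.jsonl", "rb"), purpose="batch")

# Second create call: create the batch job (POST /v1/batches).
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
# Both calls raise NotFoundError (404), since api_server.py does not
# expose the /v1/files and /v1/batches routes.
```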