yananchen1989 opened 7 months ago
Your current environment
Hello, I followed your official documentation to use vLLM. The first step is to start the server:
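(The exact command was not included in the issue; for reference, vLLM's OpenAI-compatible server is typically launched with something like `python -m vllm.entrypoints.openai.api_server --model <model-name> --port 8000`, where the model name is a placeholder for whatever is being served.)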
Then I call it from the client:
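Since the original snippet was omitted, below is a minimal sketch of such a client call, assuming the `openai` Python package (v1 API) pointed at a local vLLM endpoint on port 8000; the model name and the prompt are placeholders:

```python
# Minimal sketch, assuming a vLLM OpenAI-compatible server on localhost:8000.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # placeholder; vLLM does not check the key unless one is configured
)

response = client.chat.completions.create(
    model="meta-llama/Llama-2-7b-chat-hf",  # placeholder; must match the served model
    messages=[{"role": "user", "content": "Hello, who are you?"}],
)

# Inspect the assistant message returned by the server.
print(response.choices[0].message)
```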
However, I always get empty content in the response:

`ChatCompletionMessage(content='', role='assistant', function_call=None, tool_calls=None)`

May I know if I am missing something important?

Thanks.
How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.