gabrielhuang opened 4 months ago
In case it helps others, using `max_tokens=1` solved the issue.
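A minimal sketch of that workaround, assuming the `openai` Python client pointed at vLLM's OpenAI-compatible completions endpoint (the base URL, model name, and prompt below are placeholders):

```python
from openai import OpenAI

# Point the OpenAI client at a local vLLM OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Workaround: request one throwaway token instead of zero. With
# echo=True and logprobs set, the response still carries logprobs
# for the prompt tokens.
completion = client.completions.create(
    model="meta-llama/Llama-2-7b-hf",  # placeholder model name
    prompt="Is the sky blue? Answer Yes or No: Yes",
    max_tokens=1,
    echo=True,
    logprobs=1,
)

# Assuming the echoed prompt tokens come first in token_logprobs, the
# last prompt token's logprob sits just before the one generated token.
token_logprobs = completion.choices[0].logprobs.token_logprobs
last_prompt_token_logprob = token_logprobs[-2]
```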
Actually, let me reopen this since it appears that we're supposed to specifically allow `max_tokens=0` when `echo=True`.
https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/openai/protocol.py#L399
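For context, the linked line is the request validation in `protocol.py`. This is not vLLM's actual code, just a hedged sketch of the kind of pydantic check being discussed (class and field names are illustrative):

```python
from typing import Optional
from pydantic import BaseModel, model_validator

class CompletionRequest(BaseModel):
    prompt: str
    max_tokens: Optional[int] = 16
    echo: bool = False
    logprobs: Optional[int] = None

    @model_validator(mode="after")
    def validate_max_tokens(self):
        # max_tokens=0 only makes sense when echoing the prompt back:
        # the caller wants the prompt's logprobs without any generation.
        if self.max_tokens == 0 and not self.echo:
            raise ValueError("max_tokens must be at least 1, or 0 with echo=True")
        return self
```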
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
Your current environment
🐛 Describe the bug
I'm trying to get the log probability of the last token ("Yes" or "No") of the prompt.
In my client, I set:
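(The original settings aren't preserved here; the snippet below is a representative sketch using the `openai` Python client, based on the rest of the thread. The model name and prompt are placeholders; the relevant settings are `echo=True`, `max_tokens=0`, and `logprobs`.)

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="meta-llama/Llama-2-7b-hf",  # illustrative model name
    prompt="Is the sky blue? Answer Yes or No: Yes",
    max_tokens=0,  # generate nothing: we only want the prompt's logprobs
    echo=True,     # echo the prompt back with per-token logprobs
    logprobs=1,
)
```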
However, as soon as I query the vLLM server, it crashes with the following message in the logs: