Open tonyaw opened 4 months ago
Hey @tonyaw, this is a bit tricky in my opinion. I feel that it should return `None` if there are no tool calls to be made, rather than an empty list, `[]`. The `finish_reason` being `stop` indicates that it is not suggesting tool calls in this response. What do you think?
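Whichever convention the server settles on, a client can treat the two cases identically. A minimal sketch of that check (the attribute names assume the OpenAI Python client's response shape; `wants_tool_calls` is a hypothetical helper, not part of any library):

```python
from types import SimpleNamespace

def wants_tool_calls(choice) -> bool:
    # Treat None and [] the same way: neither requests a tool call.
    # finish_reason == "tool_calls" would be the explicit positive signal.
    return bool(getattr(choice.message, "tool_calls", None))

# Stand-in for a real ChatCompletion choice with no tool calls:
no_calls = SimpleNamespace(
    message=SimpleNamespace(tool_calls=[]),
    finish_reason="stop",
)
```

With this normalization the client behaves the same regardless of whether the server emits `None` or `[]`.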
@marklysze, I'm OK with either `None` or `[]`, as long as it is aligned between the agent framework (AutoGen) and the LLM inference framework (vLLM). :-) Since both follow the OpenAI API schema, may I ask whether the schema itself imposes a specific requirement here?
I also opened the same ticket against vLLM. Let's align with the vLLM team on an agreement. :-)
We integrated tools into vLLM with function-calling models; this might be relevant: https://docs.rubra.ai/inference/vllm
@sanjay920, thanks for the info!
Is there any possible way to bypass this? It really gives me a headache...
I can suggest a couple of approaches, both revolving around how the empty `tool_calls` field is handled before the next request is sent. If someone wants to work on a PR, that would help.
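One hedged sketch of such a client-side workaround (this is not AutoGen's actual code; the message dicts just follow the OpenAI chat format, and `scrub_empty_tool_calls` is a hypothetical name):

```python
def scrub_empty_tool_calls(messages):
    """Drop empty "tool_calls" entries from a message history before the
    next request, since a server may reject an assistant message that
    carries "tool_calls": []."""
    cleaned = []
    for msg in messages:
        msg = dict(msg)  # shallow copy so the caller's history is untouched
        if not msg.get("tool_calls"):  # catches both [] and None
            msg.pop("tool_calls", None)
        cleaned.append(msg)
    return cleaned
```

A fix inside AutoGen could apply the same idea at the point where the next request payload is assembled.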
Describe the bug
Starting with vLLM v0.5.0, it supports a new feature, "OpenAI tools support named functions": https://github.com/vllm-project/vllm/releases/tag/v0.5.0
After that, every message returned by vLLM includes an empty `tool_calls` list even when the user prompt doesn't intend to call a tool.
After the AutoGen agent receives this message, it always adds an empty `tool_calls` list to its next message.
This causes vLLM to return HTTP 400 and breaks the conversation.
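To make the failing shape concrete, here is a hedged illustration (field values are made up; the dicts follow the OpenAI chat message format):

```python
# An assistant turn that made no tool calls. Some servers (here, vLLM)
# reject the first form, where the field is present but empty, with a
# 400 error; omitting the field entirely avoids the problem.
rejected = {"role": "assistant", "content": "Hi!", "tool_calls": []}
accepted = {"role": "assistant", "content": "Hi!"}
```

The bug is that the empty list from the server's reply is echoed back verbatim in the client's next request.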
Steps to reproduce
See description.
Model Used
Llama 3 70B. This appears to be a communication issue between vLLM and AutoGen, not related to the LLM itself.
Expected Behavior
AutoGen should work with vLLM v0.5.0 and later versions without problems.
Screenshots and logs
No response
Additional Information
No response