Open daJuels opened 4 weeks ago
You have it on tool_choice auto; the model might not know it has completed its task.
You think it is a model issue? Of course, removing the tool works as a workaround for this scenario - but not with multiple tools.
LocalAI version: docker image: localai/localai:v2.22.0-aio-gpu-nvidia-cuda-11
Environment, CPU architecture, OS, and Version: docker on debian, intel i9, nvidia gpu
Describe the bug When using functions, the AI gets stuck in a loop of function calls. It seems it does not understand the tool result. As documented everywhere, after receiving a [TOOL_RESULT] the model should process the result and answer the user in the assistant role, not run the same function call again and again...
I'm not sure if this is an issue with LocalAI, the model, or the chat template? I thought that maybe the tool_call_id is missing, so the model is not able to connect the tool result to the function call.
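For reference, this is roughly the message sequence I would expect the OpenAI-style schema to require: the assistant's tool call carries an "id", and the follow-up "tool" message echoes it back as "tool_call_id" so the model can link the result to its call. This is only a minimal sketch; the tool name, id, and arguments below are made up for illustration.

```python
import json

# Hypothetical message history for a tool-calling round trip.
# "call_abc123", "get_weather", and the arguments are invented examples.
messages = [
    {"role": "user", "content": "What is the weather in Berlin?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_abc123",  # id assigned to this tool call
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": json.dumps({"city": "Berlin"}),
                },
            }
        ],
    },
    {
        "role": "tool",
        "tool_call_id": "call_abc123",  # must match the id above
        "name": "get_weather",
        "content": json.dumps({"temperature_c": 21}),
    },
]

# If tool_call_id were missing or mismatched, the model may fail to
# connect the result to its call and re-issue the same function call.
assert messages[2]["tool_call_id"] == messages[1]["tool_calls"][0]["id"]
```

If the chat template drops tool_call_id when rendering the prompt, the model would never see this linkage, which could explain the loop.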
Any ideas?
To Reproduce Use this API call with v1/chat/completions. The response is now the same function call again:
Expected behavior The response should come from the assistant role and incorporate the tool/function result.
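To make the difference concrete, here is a sketch of the expected assistant message versus what I actually observe, assuming an OpenAI-style response schema (the content string and call id are invented):

```python
# Expected: an assistant message that uses the tool result.
expected_message = {
    "role": "assistant",
    "content": "It is currently 21 degrees Celsius in Berlin.",
}

# Observed: the same function call is emitted again instead.
observed_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_def456",  # invented example id
            "type": "function",
            "function": {
                "name": "get_weather",
                "arguments": "{\"city\": \"Berlin\"}",
            },
        }
    ],
}

# The bug: the observed message repeats the call rather than answering.
assert expected_message["content"] is not None
assert "tool_calls" in observed_message
```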
Logs