sgomez / ollama-ai-provider

Vercel AI Provider for running LLMs locally using Ollama
https://www.npmjs.com/package/ollama-ai-provider

qwen2:7b tool invocation #26

Closed: whisper-bye closed this issue 1 month ago

whisper-bye commented 2 months ago

Is your feature request related to a problem? Please describe. I'm experiencing issues when using the ollama-ai-provider with ai-sdk to call the newly supported qwen2:7b tool in Ollama. Despite proper integration, the tool does not return the correct results as expected.

Describe the solution you'd like I would like the ollama-ai-provider to properly support and return the correct results when calling qwen2:7b tools via the ai-sdk.

Additional context https://ollama.com/library/qwen2:7b/blobs/77c91b422cc9

user: 
现在几点了 ("What time is it now?")
assistant: 
<tool_call>
{"name": "getTimeInformation", "arguments": {}}
</tool_call>
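
The `<tool_call>` block in the transcript above is qwen2's plain-text tool-call format; when the engine or provider does not parse it, it leaks verbatim into the assistant message, as shown. A minimal sketch of extracting such blocks (a hypothetical helper for illustration, not part of ollama-ai-provider):

```typescript
interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

// Extract <tool_call>{...}</tool_call> payloads from raw model output.
// Malformed JSON payloads are skipped rather than thrown.
function extractToolCalls(text: string): ToolCall[] {
  const calls: ToolCall[] = [];
  const re = /<tool_call>\s*([\s\S]*?)\s*<\/tool_call>/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(text)) !== null) {
    try {
      calls.push(JSON.parse(m[1]) as ToolCall);
    } catch {
      // Not valid JSON; ignore this block.
    }
  }
  return calls;
}
```

Applied to the assistant message above, this would yield a single call named `getTimeInformation` with empty arguments.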
sgomez commented 2 months ago

Sorry, but I need more information. Can you share a small repository so I can check what is happening? Please note that only generate-text and generate-stream support tool calling natively in Ollama. Tool calling over streams is an experimental feature of this provider that may not work with all models.

In any case, I tested the model with the examples/ai-core/src/generate-text/ollama-tool-call.ts example and it works.

So there is probably nothing we can do, because the tool response does not depend on the provider but on the Ollama engine.

whisper-bye commented 1 month ago

I think this is related to the adaptation of the qwen2:7b model.

sgomez commented 1 month ago

Then I am going to close the issue. If you have more information about the execution, or a PoC showing that there is a bug, I can reopen it.