Asher9971 opened 1 month ago
Hey, are you using Ollama?
(And could you maybe include the script you're using below, and which version?)
Yes, I'm using Ollama. I tried it again with your updated version of smart.py and now the error is gone. I don't know whether your update was the fix or whether something was just wrong with my Ollama instance yesterday.
But I have another problem: every time a tool should be used, I just get the raw tool call as the response.
It's the same with my image_gen, web_search, and all other tools. I'm not using your tools because I want to use my SearXNG and built-in image generation (Stable Diffusion).
But when I use these tools with my stock llama3.1:8b model, they work without problems.
For image generation I use this: https://openwebui.com/t/justinrahb/image_gen
I also tried with the 70b and 8b models.
Using llama3.1 directly, it works.
I'm using the latest smart.py from your repo.
Maybe it's because your function wants to call "generate_image" and not "image_gen"?
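For reference, this is the kind of loop I'd expect to happen instead of the raw tool call showing up in the chat. It's only a minimal sketch against Ollama's OpenAI-compatible /v1 endpoint with the openai Python client; the image_gen stub and the TOOLS registry are placeholders, not code from smart.py:

```python
import json
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API under /v1; the API key is ignored.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Placeholder registry: in reality this would dispatch to the Open WebUI tool.
TOOLS = {"image_gen": lambda prompt: f"(image generated for: {prompt})"}

tools_spec = [{
    "type": "function",
    "function": {
        "name": "image_gen",
        "description": "Generate an image from a text prompt.",
        "parameters": {
            "type": "object",
            "properties": {"prompt": {"type": "string"}},
            "required": ["prompt"],
        },
    },
}]

messages = [{"role": "user", "content": "Generate an image of a cat."}]
response = client.chat.completions.create(
    model="llama3.1:8b", messages=messages, tools=tools_spec
)
msg = response.choices[0].message

if msg.tool_calls:
    # Expected: execute the call and feed the result back to the model.
    # What I actually see in the chat is this tool call returned as text.
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = TOOLS[call.function.name](**args)
        messages.append(
            {"role": "tool", "tool_call_id": call.id, "content": str(result)}
        )
    final = client.chat.completions.create(model="llama3.1:8b", messages=messages)
    print(final.choices[0].message.content)
else:
    print(msg.content)
```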
Hey, are you using this script? https://openwebui.com/f/moophlo/multi_agent_reasoning_for_ollama/
I need to know which script you're actually using. Or are you using the generic one? If you're using the one at that link, I didn't actually make it; it's just based on my script. I have absolutely no experience with Ollama, so I can't really support it.
I tried both scripts that you mentioned, and all of yours. It's always the same behaviour. You can test it by configuring the Open WebUI image settings and using this tool: https://openwebui.com/t/justinrahb/image_gen
But the problem exists with every tool. I wrote my own Confluence tool which works fine when I use it with llama3.1, but when I use it with your function I also just get the raw tool call.
I think this is an issue with how Ollama does tool calling, not one on my end. It would probably need an Ollama-specific version, which I have neither the time nor the knowledge to build.
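If anyone else wants to take a stab at an Ollama-specific version: my rough (untested) understanding is that Ollama's native /api/chat returns tool-call arguments as an already-parsed object, while the OpenAI-style responses this script expects carry them as a JSON string, so at minimum the arguments would need normalising. A hypothetical helper, not something that exists in my code:

```python
import json

def normalize_tool_calls(message: dict) -> list[dict]:
    """Return [{"name": ..., "arguments": dict}, ...] for either shape:
    Ollama native /api/chat (arguments already a dict) or an
    OpenAI-compatible response (arguments as a JSON string).
    Field names are assumptions, not verified against every version."""
    normalized = []
    for call in message.get("tool_calls") or []:
        fn = call.get("function", {})
        args = fn.get("arguments", {})
        if isinstance(args, str):  # OpenAI-compatible: JSON-encoded string
            args = json.loads(args) if args.strip() else {}
        normalized.append({"name": fn.get("name"), "arguments": args})
    return normalized
```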
> You can test it by configuring the Open WebUI image settings and using this tool: https://openwebui.com/t/justinrahb/image_gen
All tools work fine for me, I don't see a point in testing this one specifically.
> I tried both scripts that you mentioned, and all of yours.
Please just actually give me a link to what you used and which versions. Ideally just paste the scripts you've tried in code blocks below...
> Hey, are you using this script? https://openwebui.com/f/moophlo/multi_agent_reasoning_for_ollama/
> I need to know which script you're actually using. Or are you using the generic one? If you're using the one at that link, I didn't actually make it; it's just based on my script. I have absolutely no experience with Ollama, so I can't really support it.
I deleted that clone; yours works perfectly with Ollama too, and that's what I'm using. It's just a matter of adjusting the variables. Ollama is almost fully compatible with the OpenAI API, so it works out of the box.
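For anyone else wiring it up, this is essentially all the adjustment amounts to: point the client at Ollama's OpenAI-compatible endpoint and use any dummy API key (shown here with the plain openai client; the actual valve names in smart.py may differ):

```python
from openai import OpenAI

# Ollama serves an OpenAI-compatible API on its default port under /v1.
# The API key is required by the client but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="llama3.1:8b",  # any model pulled with `ollama pull`
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp.choices[0].message.content)
```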
And by the way, you're doing an amazing job here. Looking forward to seeing the tools completed and added to the Open WebUI Community Tools as well.
Ah okay! I'll take a look at it as soon as I have time to fix the tool calling issue.
Hi, do you know about the "load failed" error that appears when complex tasks take longer to complete?
Sometimes I try the same prompt again and then it works. For "easy" prompts like "translate..." or something, I never get the "load failed" error message.
Maybe there is a default timeout somewhere? I use llama3.1:8b-instruct-q8_0 for small and medium and llama3.1:70b-instruct-q4_K_S for large and huge, running on an RTX 6000 48GB.
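Could it simply be an HTTP client timeout on the request to Ollama? I don't know what smart.py uses internally, but if it happens to be aiohttp (just a guess), the default total timeout is around five minutes, and raising it would look roughly like this:

```python
import aiohttp
import asyncio

# 1800 s is an arbitrary example; slow 70b generations can exceed the
# default total timeout of roughly five minutes.
TIMEOUT = aiohttp.ClientTimeout(total=1800)

async def chat(payload: dict) -> dict:
    # Ollama's OpenAI-compatible chat endpoint (assuming the default host/port).
    url = "http://localhost:11434/v1/chat/completions"
    async with aiohttp.ClientSession(timeout=TIMEOUT) as session:
        async with session.post(url, json=payload) as resp:
            return await resp.json()

# Example call:
# asyncio.run(chat({"model": "llama3.1:70b-instruct-q4_K_S",
#                   "messages": [{"role": "user", "content": "Hello"}]}))
```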