Open Iliceth opened 2 months ago
I found a way to get it working, by editing the llm.py block file. Does that mean I have to add, by hand, every model I want to run via Ollama, each time I pull another one?
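For context, the edit described here is presumably along these lines: a minimal sketch assuming the block file keys its selectable models off a hard-coded enum plus a metadata table. The names, values, and metadata fields below are illustrative, not AutoGPT's actual identifiers.

```python
# Hypothetical sketch of the kind of manual edit described above.
# Assumption: the block file enumerates models in a str-valued Enum,
# so every new local Ollama model needs its own entry added by hand.
from enum import Enum


class LlmModel(str, Enum):
    # ... existing provider entries ...
    OLLAMA_LLAMA3_405B = "llama3.1:405b"  # the only Ollama entry shipped
    OLLAMA_MISTRAL = "mistral:latest"     # manually added local model


# Any metadata table keyed by model would need a matching entry as well
# (field names here are made up for illustration):
MODEL_METADATA = {
    LlmModel.OLLAMA_LLAMA3_405B: {"provider": "ollama", "context": 128000},
    LlmModel.OLLAMA_MISTRAL: {"provider": "ollama", "context": 32768},
}
```

If that is roughly the shape of the code, then yes, each newly pulled Ollama model would require another pair of entries until the list is populated dynamically.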
This should be possible. Can you tell me more about the modifications you had to make?
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
⚠️ Search for existing issues first ⚠️
Which Operating System are you using?
Linux
Which version of AutoGPT are you using?
Stable (branch)
What LLM Provider do you use?
Other (detail in issue)
Which area covers your issue best?
Other
What commit or version are you using?
2618d1d87cd04623c848df870800f328fe36bc83
Describe your issue.
The documentation clearly states that Ollama can be used: run a model in Ollama, start the server, start the builder, and then select the last option in the model list in the blocks. However, the model list always stays the same, whichever model I run in Ollama, and the last model in the list is llama3.1:405b, which the terminal correctly reports I don't have.
Does that mean that a running Ollama instance is not detected by AutoGPT? The list never shows anything else, whatever model I run. Or does the entry that refers to the model running in Ollama simply not get renamed? Or am I missing something else?
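On the detection question: Ollama serves a local HTTP API, and `GET /api/tags` returns the models that are actually pulled locally, so the list of available models is queryable in principle. Below is a small sketch that parses a response of that documented shape; the actual fetch of `http://localhost:11434/api/tags` is left out so the snippet runs offline, and the model names are example data.

```python
# Sketch: extracting locally available model names from the JSON shape
# returned by Ollama's GET /api/tags endpoint. The sample payload below
# is illustrative; a live check would fetch http://localhost:11434/api/tags.
import json

sample_response = json.dumps({
    "models": [
        {"name": "mistral:latest", "size": 4109865159},
        {"name": "llama3:8b", "size": 4661224676},
    ]
})


def local_model_names(raw: str) -> list[str]:
    """Return the model names Ollama reports as pulled locally."""
    return [m["name"] for m in json.loads(raw)["models"]]


print(local_model_names(sample_response))
```

If AutoGPT's model dropdown is a static list rather than something derived from this endpoint, that would explain why the list never changes regardless of which model is running.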
Upload Activity Log Content
No response
Upload Error Log Content
No response