Open DIGist opened 22 hours ago
@DIGist Thank you for filing this issue.
The `--local_llm` flag is being deprecated and shouldn't be used; the equivalent functionality is available through the GUI.
The `-api` flag should only be used when you're running the program purely through the CLI; otherwise you select the API manually from the dropdown in the GUI. I'm planning to add a `default_api` option to the `config.txt` file, but it's not in yet.
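Once it lands, the idea is that `config.txt` would carry something like the following (exact key name subject to change, this isn't implemented yet):

```
# Planned, not yet implemented: pre-select the API the GUI starts with
default_api = ollama
```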
Regarding the error, is this occurring immediately after application start, or after you've attempted to perform some action within the GUI?
I've gone ahead and refactored the Ollama_tab so it now does lazy loading, which should fix that issue (I think?).
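The gist of the change: nothing touches the Ollama endpoint at application start anymore; the model list is only fetched once the tab is actually used. A rough sketch of the pattern (names simplified here, not the literal code):

```python
import requests


class OllamaTab:
    """GUI tab that talks to Ollama, loading its model list lazily."""

    def __init__(self, base_url: str):
        # No network I/O in the constructor, so app startup no longer
        # depends on the Ollama endpoint being reachable.
        self.base_url = base_url.rstrip("/")
        self._models = None

    @property
    def models(self):
        # First access triggers the (potentially slow or failing) call;
        # subsequent accesses reuse the cached list.
        if self._models is None:
            resp = requests.get(f"{self.base_url}/api/tags", timeout=5)
            resp.raise_for_status()
            self._models = [m["name"] for m in resp.json().get("models", [])]
        return self._models
```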
Please pull/update your local copy and let me know if it works.
(Thanks again for filing this, really appreciate it)
**Are you on the latest version?** (You did a `git pull` and are running the latest version/build?)
Yes
**Please describe the bug**
When attempting to connect to a networked instance of Ollama, the program still seems to be trying to control the llama instance through the Ollama CLI.

config.txt:
```
ollama_api_IP = http://myserver:33821/v1/chat/completions
ollama_api_key = 411
ollama_model = qwen2.5
```
Running:
```
python summarize.py -gui -api ollama --local_llm
```
Tried with and without the `--local_llm` switch.
log:
**Is the bug reproducible reliably?**
Yes
**What was the expected behavior?**
Connect to a networked Ollama instance.
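(For anyone reproducing this: the expected behavior amounts to a plain HTTP call against the configured endpoint, with no local CLI involvement. A minimal probe using the values from the config.txt above, assuming the OpenAI-compatible `/v1/chat/completions` route is live on that server:)

```python
import requests

# Values taken from the reporter's config.txt
url = "http://myserver:33821/v1/chat/completions"
headers = {"Authorization": "Bearer 411"}
payload = {
    "model": "qwen2.5",
    "messages": [{"role": "user", "content": "ping"}],
}

# If this succeeds, the networked Ollama instance is reachable and the
# bug is in the client routing requests through the local CLI instead.
resp = requests.post(url, json=payload, headers=headers, timeout=30)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```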