-
Currently, the Ollama connector implements its [own client](https://github.com/microsoft/semantic-kernel/tree/feature-connectors-ollama/dotnet/src/Connectors/Connectors.Ollama/Client).
Consider rep…
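For reference, that client ultimately wraps plain HTTP calls to the Ollama server; a minimal Python sketch of the equivalent request (the default local URL and the `llama3` model name are assumptions):

```python
# A minimal sketch of the kind of call such a client wraps; assumes a
# default local Ollama server and that the "llama3" model is pulled.
import requests

OLLAMA_BASE_URL = "http://localhost:11434"  # assumption: default Ollama port

def chat(prompt: str, model: str = "llama3") -> str:
    """Send one chat turn to Ollama's /api/chat endpoint and return the reply."""
    response = requests.post(
        f"{OLLAMA_BASE_URL}/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,  # ask for a single JSON object, not a stream
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["message"]["content"]

print(chat("Why is the sky blue?"))
```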
-
Hi,
I've installed local-packet-whisperer on an Ubuntu 22.04 server.
The UI is up, but after I successfully load the pcap file and start chatting in the UI, the app crashes with a Connection err…
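A quick way to confirm whether the usual cause (an unreachable Ollama server) applies, assuming the app talks to a local server on the default port:

```python
# Connectivity check against a local Ollama server on the default port;
# a failure here would explain the connection error above.
import requests

try:
    r = requests.get("http://localhost:11434", timeout=5)
    print(r.status_code, r.text)  # a healthy server replies "Ollama is running"
except requests.ConnectionError as e:
    print("Ollama is not reachable:", e)
```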
-
### How are you running AnythingLLM?
AnythingLLM desktop app
### What happened?
Loving this app, thank you for the great work! But so far I haven't been able to get the Linux client to work.
Installed …
-
**Describe the bug**
My Ollama server is running on a different machine, and I am unable to provide the Ollama base URL in the current code, since the URL is hard-coded to _localhost:11434_.
**To Re…
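A common fix is to read the base URL from the environment instead of hardcoding it; a sketch using the ollama-python client, with `OLLAMA_HOST` as the assumed variable name (it is the convention Ollama itself uses):

```python
import os
from ollama import Client

# Read the base URL from the environment instead of hardcoding it;
# fall back to the default local address when the variable is unset.
base_url = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
client = Client(host=base_url)
```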
-
Hello,
I'm confident that a feature enabling multi-GPU optimization and batch management would be beneficial.
I may have made a mistake, as I couldn't effectively use the `ollama_num_parallel` …
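Note that `OLLAMA_NUM_PARALLEL` (uppercase) is read by the server process, not the client; a sketch of launching the server with it set (the chosen value and the `OLLAMA_SCHED_SPREAD` multi-GPU flag are assumptions about the desired setup):

```python
# OLLAMA_NUM_PARALLEL is a server-side setting: it must be in the
# environment of the `ollama serve` process, not of the client.
import os
import subprocess

env = dict(os.environ)
env["OLLAMA_NUM_PARALLEL"] = "4"   # handle up to 4 requests per model at once
env["OLLAMA_SCHED_SPREAD"] = "1"   # assumption: spread a model across all GPUs

subprocess.Popen(["ollama", "serve"], env=env)
```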
-
As the title indicates, it says I don't have OpenAI quota, but I don't want to use OpenAI or anything else outside of my own system. I have a local Ollama with many local models, which works wi…
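Ollama exposes an OpenAI-compatible endpoint at `/v1`, so an app that insists on an OpenAI backend can often be pointed at the local server instead; a sketch (the model name is an assumption about what is pulled locally):

```python
# Point an OpenAI-style client at the local Ollama server; no OpenAI
# account or quota is involved.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama, OpenAI-compatible API
    api_key="ollama",                      # required by the client but unused by Ollama
)
reply = client.chat.completions.create(
    model="llama3",  # assumption: this model is pulled locally
    messages=[{"role": "user", "content": "Hello"}],
)
print(reply.choices[0].message.content)
```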
-
I am trying ollama.Client to connect to a remote server for chat.
server A: http://192.168.0.123:11434, Ollama installed with Docker, ollama-python v0.2.0
local machine: M1 Max MacBook Pro, ollama …
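For comparison, a sketch of that remote setup with ollama-python's `Client`, passing an explicit host instead of the default localhost (the model name is an assumption):

```python
# Chat against server A from the local machine by giving Client an
# explicit host rather than relying on the localhost default.
from ollama import Client

client = Client(host="http://192.168.0.123:11434")
response = client.chat(
    model="llama3",  # assumption: this model is pulled on server A
    messages=[{"role": "user", "content": "Hello from the M1 Mac"}],
)
print(response["message"]["content"])
```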
-
The `_parse_host` method seems to be stripping off the end of URLs that Ollama may be proxied behind. For example, if I have a rule set up in Caddy/Nginx to forward Ollama to http://localhost:8080/olla…
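For reference, a sketch of the path-preserving behavior the report expects, using urllib rather than the library's actual `_parse_host` implementation:

```python
# A proxied base URL like http://localhost:8080/ollama should keep its
# path component so endpoint paths can be appended after the prefix.
from urllib.parse import urlparse

url = urlparse("http://localhost:8080/ollama")
base = f"{url.scheme}://{url.netloc}{url.path.rstrip('/')}"
print(base)                # -> http://localhost:8080/ollama
print(base + "/api/chat")  # -> http://localhost:8080/ollama/api/chat
```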
-
Current script directory: E:\waifu\Waifu-texto-ollama-xtts\
E:\waifu\Waifu-texto-ollama-xtts\xtts-venv\lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaultin…
-