-
### What is the issue?
I custom-compile Ollama for AMD and change the version file so that it reports a custom version such as `0.1.47-amd`.
Yet when I run `ollama -v` within the compiled directory (I em…
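A minimal diagnostic sketch for this kind of mismatch, assuming the freshly built binary sits in the current directory: it compares what the shell resolves on PATH against the local build, since a stale system-wide install shadowing the new binary is a common cause.

```python
import shutil
import subprocess

# Which ollama does the shell actually resolve?
print("PATH resolves to:", shutil.which("ollama"))

# Compare the local build's version against the PATH binary's version.
print(subprocess.run(["./ollama", "-v"], capture_output=True, text=True).stdout)
print(subprocess.run(["ollama", "-v"], capture_output=True, text=True).stdout)
```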
-
Currently, the Ollama connector implements its [own client](https://github.com/microsoft/semantic-kernel/tree/feature-connectors-ollama/dotnet/src/Connectors/Connectors.Ollama/Client).
Consider rep…
-
Hi, somehow I get no traces when using LangChain's Ollama, but everything seems fine with the raw Ollama package:
```
from ollama import Client
import openlit
openlit.init( otlp_endpoint="http://127.0.0.…
```
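For comparison, a minimal sketch of the failing LangChain path, assuming the `langchain_ollama` package, an assumed model name, and an assumed collector endpoint (the address in the snippet above is cut off):

```python
import openlit
from langchain_ollama import ChatOllama

# Assumed OTLP endpoint; the one in the snippet above is truncated.
openlit.init(otlp_endpoint="http://127.0.0.1:4318")

llm = ChatOllama(model="llama3")  # assumed model name
print(llm.invoke("Hello").content)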
-
I am using the "Griptape Create: Agent" node. It works with the default config (OpenAI), but fails when connected to the "Griptape Agent Config: Ollama" node.
I am on Windows 11, and have installed Ollama …
-
**Describe the bug**
My Ollama server is running on a different machine, and I am unable to provide the Ollama base URL in the current code, since the URL is hard-coded to _localhost:11434_.
**To Re…
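A minimal sketch of making the base URL configurable, assuming the Python `ollama` client; the `OLLAMA_HOST` variable name mirrors the CLI convention and is an assumption about how the fix could look:

```python
import os
from ollama import Client

# Read the base URL from the environment instead of hard-coding it.
host = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
client = Client(host=host)
```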
-
Hello,
I'm confident that a feature enabling multi-GPU optimization and batch management would be beneficial.
I may have made a mistake, as I couldn't effectively use the `ollama_num_parallel` …
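One detail worth checking: `OLLAMA_NUM_PARALLEL` is an environment variable read by the server process, uppercase, so it has to be set before `ollama serve` starts. A minimal sketch, with the value chosen arbitrarily:

```python
import os
import subprocess

# OLLAMA_NUM_PARALLEL must live in the server's environment; setting it
# client-side has no effect. The value 4 here is arbitrary.
env = dict(os.environ, OLLAMA_NUM_PARALLEL="4")
subprocess.Popen(["ollama", "serve"], env=env)
```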
-
Hi,
When I try to run phi3 I get:
`Error: llama runner process no longer running: -1`
The newest version was installed with:
`curl -fsSL https://ollama.com/install.sh | sh`
When I run `ollama --version` I get:
o…
-
As the title indicates, it says I don't have OpenAI quota, but I don't want to use OpenAI or anything else outside of my own system. I have a local Ollama, with many local models, which works wi…
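Ollama does expose an OpenAI-compatible endpoint at `/v1`, so tools that insist on the OpenAI client can often be pointed at the local server instead. A minimal sketch, assuming the `openai` Python package and a locally pulled `llama3` model:

```python
from openai import OpenAI

# Point the OpenAI client at the local Ollama server; the API key is
# required by the client but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
resp = client.chat.completions.create(
    model="llama3",  # any locally pulled model
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```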
-
I am trying to use `ollama.Client` to connect to a remote server for chat.
Server A: http://192.168.0.123:11434, Ollama installed with Docker, ollama-python v0.2.0.
Local machine: M1 Max MacBook Pro, ollama …
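A minimal sketch of that remote chat call, assuming ollama-python's `Client` and a model already pulled on server A:

```python
from ollama import Client

client = Client(host="http://192.168.0.123:11434")  # server A from above
resp = client.chat(
    model="llama3",  # assumed model name; must already be pulled on the server
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp["message"]["content"])
```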
-
```
Current script directory: E:\waifu\Waifu-texto-ollama-xtts\
E:\waifu\Waifu-texto-ollama-xtts\xtts-venv\lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaultin…
```
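The pydub warning just means ffmpeg/avconv could not be found on PATH. A minimal sketch of pointing pydub at an explicit binary, with a hypothetical install location:

```python
from pydub import AudioSegment

# Hypothetical ffmpeg path; alternatively, add its folder to PATH.
AudioSegment.converter = r"E:\tools\ffmpeg\bin\ffmpeg.exe"
```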