-
How can I get an API key for a locally hosted Ollama instance?
-
Hi,
I have two requests 😅:
1. Could you publish a Docker image?
2. Is it possible to use the Ollama API instead of OpenAI?
Thanks for this great project!
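On the second request: Ollama exposes an OpenAI-compatible endpoint under `/v1`, so most OpenAI-style clients can be pointed at it just by overriding the base URL. A minimal sketch of the settings involved (the host default and model name are assumptions, not values from this project):

```python
import os

# Ollama's OpenAI-compatible API lives under /v1; it ignores the API key,
# but most OpenAI clients require a non-empty value, so a placeholder is used.
OLLAMA_HOST = os.environ.get("OLLAMA_HOST", "http://localhost:11434")

def openai_compat_config(model: str = "llama3") -> dict:
    """Build the settings an OpenAI-style client needs to talk to Ollama."""
    return {
        "base_url": f"{OLLAMA_HOST}/v1",
        "api_key": "ollama",  # placeholder; Ollama does not check it
        "model": model,
    }
```

With the official `openai` Python package this would then be used as `OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])`.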
-
### Describe the bug
When accessing Bolt through a remote URL, Ollama models are not visible in the web UI, despite both services being individually accessible remotely. The models appear correctly…
-
### Describe the feature you'd like
I currently use Ollama via Open-WebUI in Docker. That provides an Ollama API, but behind a per-user API key. That is more secure than having an open, unauthenticated…
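From the client side, such a key-protected proxy typically just expects a standard Bearer token on each request. A sketch of building one, assuming a conventional `Authorization` header (the URL, key, and model here are placeholders):

```python
import json
import urllib.request

def ollama_request(prompt: str, base_url: str, api_key: str) -> urllib.request.Request:
    """Build an authenticated request against a key-protected Ollama proxy."""
    body = json.dumps({"model": "llama3", "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{base_url}/api/generate",
        data=body.encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # per-user key issued by the proxy
        },
        method="POST",
    )
```

The request object would then be sent with `urllib.request.urlopen(req)` once the proxy is reachable.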
-
### Issue
I am receiving the following error when attempting to use my local Llama instance of qwen2.5-coder-32b-instruct. (Ubuntu 24.04) When starting aider and pointing to my ollama instance with…
-
### Reference Issues
_No response_
### Summary
Currently, the Ollama URL is hardcoded to `http://localhost:11434/api`
This becomes an issue when deploying Kotaemon in more advanced scenarios…
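One common way to avoid the hardcoded endpoint is to read it from an environment variable, falling back to the current value so existing deployments keep working. A minimal sketch; `OLLAMA_BASE_URL` is a hypothetical setting name, not one Kotaemon defines:

```python
import os

# Current hardcoded value, kept as the fallback for backward compatibility.
DEFAULT_OLLAMA_URL = "http://localhost:11434/api"

def ollama_base_url() -> str:
    """Resolve the Ollama endpoint, preferring an environment override."""
    return os.environ.get("OLLAMA_BASE_URL", DEFAULT_OLLAMA_URL).rstrip("/")
```

In a Docker or Kubernetes deployment the override would then be injected via the container environment.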
-
When following the introduction docs, using GenAIScript with any Ollama model fails.
This seems to stem from two issues:
1. Making invalid calls to the ollama host, it is not providing the co…
-
### 🐛 Describe the bug
```python
# First sample is for the Qdrant server and Ollama
import os
import subprocess
from collections import deque

from mem0 import Memory

# Configuration for Mem0 and Ollama
c…
```
-
Thank you for your work. I am having issues with an unstable connection when using the Ollama API. Is it possible to use your code with a local Ollama model?
-
### Describe the bug
I have attempted to set up the production environment using Docker, following the steps outlined in the README, as well as additional troubleshooting steps, but the setup consi…