-
I started a server with the command `OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=4 ./ollama serve`. We opened 4 terminals and executed the command `./ollama run codellama`, after which the model loade…
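The four-terminal setup above can also be reproduced from a single script. This is a sketch, not the reporter's actual client: it assumes Ollama's default endpoint `http://localhost:11434/api/generate`, and only the model name `codellama` comes from the report.

```python
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama port (assumed)

def make_request(prompt: str, model: str = "codellama") -> dict:
    # JSON body for a non-streaming /api/generate call.
    return {"model": model, "prompt": prompt, "stream": False}

def run_one(prompt: str) -> str:
    # POST one prompt and return the generated text (needs a running server).
    body = json.dumps(make_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

def run_parallel(n: int = 4) -> list:
    # Mirror the 4-terminal setup: n concurrent generations against one server,
    # matching OLLAMA_NUM_PARALLEL=4 on the serve side.
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(run_one, [f"Say hello #{i}" for i in range(n)]))
```

Calling `run_parallel()` with the server up issues the four requests concurrently, which is the condition the report describes.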
-
### Pre-check
- [X] I have searched the existing issues and none cover this bug.
### Description
```
[+] Running 3/0
 ⠿ Container private-gpt-ollama-cpu-1  Created …
```
-
At present, the agent API only supports the following models. Ollama is the most commonly used way to run large models locally, so we hope an Ollama agent can be added. At the same time, we raise questions abo…
-
I want to use KoboldCpp or Ollama to provide translation services, since many LLM models can offer better results than existing online translation websites. What are the ways to achieve this?…
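One way to achieve this with Ollama is to wrap the source text in a translation prompt and send it to the local server. A minimal sketch, assuming Ollama's standard `/api/generate` endpoint on the default port; the model name `llama3` is purely illustrative:

```python
import json
import urllib.request

def translation_payload(text: str, target_lang: str = "English",
                        model: str = "llama3") -> dict:
    # Build the request body; "stream": False returns one complete JSON reply.
    prompt = (f"Translate the following text into {target_lang}. "
              f"Reply with the translation only:\n\n{text}")
    return {"model": model, "prompt": prompt, "stream": False}

def translate(text: str, target_lang: str = "English") -> str:
    # Requires a local Ollama server listening on the default port 11434.
    body = json.dumps(translation_payload(text, target_lang)).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate", data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"].strip()
```

The same request shape works for KoboldCpp only through its own API, which differs; this sketch covers the Ollama path.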
-
**Describe the bug**
I'm getting ' ERROR There was a problem with the ollama API request.' when trying to use mods with Ollama
This used to work with no issues until recently; I don't know if the…
-
### What is the issue?
After installing Ollama and attempting to run it, an error occurs. Upon checking the log file `~/.ollama/logs/server.log`, the following content is found:
```
Couldn't find '…
-
ollama/ollama:latest
-
Error when using the chat API with VS Code's Continue plugin
```
  File "E:\open-webui\backend\python311\Lib\site-packages\ktransformers\server\api\ollama\completions.py", line 96, in chat
    raise Not…
-
### Description
Could we add support for Ollama in AppFlowy? It has an API and could serve as a self-hosted, open-source alternative to the OpenAI API
### Impact
an API and could be …
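Ollama also exposes an OpenAI-compatible chat endpoint, which is what makes it a plausible drop-in here: an OpenAI-style client can simply be pointed at the local base URL. A hedged sketch (endpoint path per Ollama's compatibility layer; the model name `llama3` is illustrative):

```python
import json
import urllib.request

# Ollama's OpenAI-compatible endpoint on the default local port.
OLLAMA_OPENAI_URL = "http://localhost:11434/v1/chat/completions"

def chat_payload(user_message: str, model: str = "llama3") -> dict:
    # Same request shape the OpenAI chat completions API expects.
    return {"model": model,
            "messages": [{"role": "user", "content": user_message}]}

def chat(user_message: str) -> str:
    # Needs a running Ollama server; only the base URL differs from OpenAI.
    body = json.dumps(chat_payload(user_message)).encode()
    req = urllib.request.Request(
        OLLAMA_OPENAI_URL, data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the request and response shapes match OpenAI's, existing OpenAI integration code would mostly need only a configurable base URL.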
-
### Module
Ollama
### Testcontainers version
1.20.1
### Using the latest Testcontainers version?
Yes
### Host OS
Windows 11
### Host Arch
x86
### Docker version
Docker…