-
### Is there an existing issue for the same bug?
- [X] I have checked the existing issues.
### Branch name
main
### Commit ID
265a7a283ae1a29a4827d2a973d62cad14c41dc5
### Other envir…
-
### Extension
https://www.raycast.com/massimiliano_pasquini/raycast-ollama
### Raycast Version
1.83.1
### macOS Version
_No response_
### Description
#### Error:
```
Error: Worker terminated …
-
### Is your feature request related to a problem? Please describe
I see the latest nightly has pull and list available, like Ollama. Awesome.
It allows me to use ollama list/pull.
Any chance to trigge…
-
I'm running the server normally with `python -m routellm.openai_server --routers mf --weak-model ollama_chat/codeqwen`
and am getting this whenever I attempt a prompt:
File "C:\Users\nate\AppData…
-
**Describe the bug**
I tried using twinny with deepseek-coder-v2:16b via Ollama, but both chat and FIM don't seem to work due to the following errors seen in the ollama logs:
```
[GIN] 2024/09/13 - 1…
-
ERROR [nuxt] [request error] [unhandled] [500] Chroma getOrCreateCollection error: Error: TypeError: fetch failed
at Chroma.ensureCollection (/E:/gitINe/chat-ollama/node_modules/.pnpm/@langchain+…
-
I have ollama-python running with a custom ollama model. It works very well except that it does not remember the conversation at all. Every chat is like a new conversation.
I checked issues and I c…
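A likely cause, hedged: Ollama's chat endpoint is stateless, so "memory" means the caller must accumulate the message history and resend the whole list on every call. Below is a minimal sketch of that pattern; `fake_chat` is a stand-in for `ollama.chat(model=..., messages=...)` so the sketch runs without a live server (with ollama-python you would read the reply from `response["message"]`):

```python
def fake_chat(messages):
    """Stand-in for ollama.chat: replies with how many turns it can see.
    In real code this would be: ollama.chat(model="...", messages=messages)."""
    return {"message": {"role": "assistant",
                        "content": f"I can see {len(messages)} messages"}}

def chat_with_memory(history, user_text):
    """Append the user turn, call the model with the WHOLE history,
    then append the assistant turn so the next call remembers it."""
    history.append({"role": "user", "content": user_text})
    reply = fake_chat(history)  # real code: ollama.chat(..., messages=history)
    history.append(reply["message"])
    return reply["message"]["content"]

history = []
chat_with_memory(history, "My name is Nate.")
# On the second call the model receives all three prior messages:
print(chat_with_memory(history, "What is my name?"))  # -> "I can see 3 messages"
```

If each call passes only the latest user message, the model has no context and every chat starts fresh, which matches the behavior described above.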
-
Sometimes, while using nano-graphrag, progress gets stuck during the Entity Extraction process.
While stuck, the terminal output looks like this:
```
"Processed 26 chunks, 378 entities found…
-
Hi, usually Ollama runs on the local machine together with AnythingLLM, so the URL can be http://localhost:11434.
I have hosted Ollama on a server and secured it.
Now I can access it at https://ollama.somes…
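For reference, pointing a client at a secured remote Ollama host usually means attaching an auth header to every request. The host and token below are placeholders for illustration, not values from this report; this stdlib-only sketch builds the request for Ollama's `/api/tags` (model list) endpoint, with the actual network call left commented out:

```python
import urllib.request

# Placeholder values -- substitute your real host and credentials.
BASE_URL = "https://ollama.example.com"   # hypothetical secured host
TOKEN = "my-secret-token"                 # hypothetical bearer token

def build_tags_request(base_url, token):
    """Build a GET request for /api/tags (Ollama's model-list endpoint)
    with a Bearer Authorization header attached."""
    return urllib.request.Request(
        f"{base_url}/api/tags",
        headers={"Authorization": f"Bearer {token}"},
    )

req = build_tags_request(BASE_URL, TOKEN)
print(req.full_url)  # https://ollama.example.com/api/tags
# To actually query the server:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())
```

If this request succeeds from the machine running AnythingLLM but the app still fails, the issue is likely in how the app forwards (or drops) the auth header rather than in the Ollama server itself.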
-
Hi,
I just tested to see if it can find the default shortcut for Page Assist in the GitHub repo. It wasn't able to achieve the task in web interaction mode. I tested it with Llama3.1 and Gemma l…