-
### What is the issue?
After downloading and installing, an additional download of a compiled rocBLAS is required. The downloaded rocblas.dll overwrites the rocblas.dll that comes with the SDK, and puts rocblas.dll in the …
-
Thank you for the llama 3.2 vision integration!
I was using llama3.2-3b with ChatOllama(model="llama3.2:latest").with_structured_output() to get a structured response from the model and I was hopin…
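A minimal sketch of the call pattern being described, using langchain-ollama; the `Answer` schema here is an illustrative assumption, not the reporter's actual model:
```python
# Minimal sketch of structured output with langchain-ollama.
# The `Answer` schema is an illustrative assumption, not the reporter's model.
from langchain_ollama import ChatOllama
from pydantic import BaseModel

class Answer(BaseModel):
    summary: str
    confidence: float

llm = ChatOllama(model="llama3.2:latest")
structured_llm = llm.with_structured_output(Answer)

# Returns an `Answer` instance parsed from the model's JSON output.
result = structured_llm.invoke("Summarize: Ollama now supports vision models.")
print(result)
```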
-
### What is the issue?
The streamed chat-completion response from Ollama's OpenAI-compatible API repeats `"role": "assistant"` in all returned chunks. This differs from OpenAI's API, which just has…
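A repro sketch of the reported behavior, assuming Ollama's default port and the official `openai` Python client:
```python
# Stream a completion from Ollama's OpenAI-compatible endpoint and print the
# `role` field of every chunk. OpenAI's own API sets the role only in the
# first chunk; per this report, Ollama repeats "assistant" in all chunks.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

stream = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta
    print(delta.role, repr(delta.content))
```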
-
Enhance RAGGENIE by integrating Ollama as a new LLM provider, enabling users to perform inferences with self-hosted language models.
**Task:**
- Develop an Ollama loader to facilitate inference g…
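A rough sketch of what such a loader could look like; the `OllamaLoader` class name and the `generate()` interface are assumptions, since RAGGENIE's actual provider base class is not shown here:
```python
# Rough sketch of an Ollama loader. The class name and generate() interface
# are assumptions; RAGGENIE's actual provider base class may differ.
import requests

class OllamaLoader:
    def __init__(self, base_url: str = "http://localhost:11434",
                 model: str = "llama3.2"):
        self.base_url = base_url
        self.model = model

    def generate(self, prompt: str) -> str:
        # /api/generate with "stream": False returns one JSON object
        # whose "response" field holds the full completion.
        resp = requests.post(
            f"{self.base_url}/api/generate",
            json={"model": self.model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]
```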
-
Long prompts/responses crash llama-server because "Deepseek2 does not support K-shift". For long prompts/responses, llama-server should return an error message or truncate the response, but instead, `…
-
### What is the issue?
`ollama run gemma2:2b`
pulling manifest
Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/gemma2/manifests/2b": write tcp [2601:19b:0:b8a0:915f:c8c:3de4…
-
### What happened?
When forcing llama.cpp to use "GPU + CUDA + VRAM + shared memory (UMA)", we noticed:
- High CPU load (even when only GPU should be used)
- Worse performance than using "CPU + RAM…
-
I'd like to have the package `llm-ollama` on `nixhub.io` (it is an `ollama` plugin for `llm`) so I can install it alongside the existing `llm` package.
https://github.com/taketwo/llm-ollama
-
### 🥰 Feature Description
Ollama 0.4.0 added support for the llama3.2-vision model, which can recognize images: https://ollama.com/blog/llama3.2-vision
I tried calling the llama3.2-vision model from LobeChat v1.28.4 and found that it does not handle images correctly.
The relevant request body is visible in the logs:
```json
{
"message…
```
-
I tried to register my Ollama node in api_config.yaml:
```yaml
SERPER_API_KEY: null
OPENAI_API_KEY: null
ANTHROPIC_API_KEY: null
LOCAL_API_KEY: anykey
LOCAL_API_URL: http://127.0.0.1:11434
```
B…
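A quick way to verify that the endpoint configured in LOCAL_API_URL is reachable; note that `/api/tags` is Ollama's native model listing and `/v1/models` its OpenAI-compatible one (whether this project expects the `/v1` suffix in LOCAL_API_URL is an assumption worth checking):
```python
# Quick connectivity check for the endpoint configured in LOCAL_API_URL.
import requests

base = "http://127.0.0.1:11434"
# Native Ollama route: lists the models that have been pulled.
print(requests.get(f"{base}/api/tags").json())
# OpenAI-compatible route: some clients expect LOCAL_API_URL to end in /v1.
print(requests.get(f"{base}/v1/models").json())
```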