-
### What happened?
I tried to run the tinyllama-1.1b model on a [OnePlus CPH2573](https://vulkan.gpuinfo.org/listreports.php?devicename=OnePlus+CPH2573&platform=android) (with Adreno™ 750). It works …
-
### What is the issue?
I used the arm64 build package and it runs successfully.
However, while the LLM is answering a question, the CPU load is 100% but the GPU is nearly 0% in `jtop`.
Is it normal, or is the arm…
-
Not urgent and not performance/correctness related, but rather stylistic: use consistent terminology between code and papers.
Examples: `color_class` --> `color`; the query `colors()` --> `color()`…
jermp updated 3 months ago
-
### What is the issue?
Running `ollama run smollm:135m`, or any other model, results in: _Error: no suitable llama servers found_.
I'm running Fedora Linux; the previous version, Ollama 0.3.4, worked. …
-
### Issue I am facing
(parameter) update: Update
`Item "None" of "Optional[Message]" has no attribute "reply_html"` mypy (error)
mypy keeps reporting this error even after I disable type checking. Is this act…
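For context, this mypy error means an attribute is accessed on a value typed `Optional[Message]` without first ruling out `None`. A minimal sketch of the pattern and the usual fix, using hypothetical stand-in classes (not the real library's API):

```python
from typing import Optional

class Message:
    """Stand-in for a message object with a reply_html method."""
    def reply_html(self, text: str) -> str:
        return f"<b>{text}</b>"

class Update:
    """Stand-in for an update whose message may be absent."""
    def __init__(self, message: Optional[Message] = None) -> None:
        self.message = message  # typed Optional[Message]: may be None

def handle(update: Update) -> Optional[str]:
    # Calling update.message.reply_html(...) directly triggers the mypy
    # error above, because update.message may be None.
    if update.message is None:  # narrows Optional[Message] to Message
        return None
    return update.message.reply_html("hello")
```

After the explicit `None` check, mypy narrows the type and the call passes; an `assert update.message is not None` achieves the same narrowing.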
-
### What is the issue?
Hi,
after some time of running my tests that send requests to Ollama's API /api/chat, I'm getting `/go/src/github.com/ollama/ollama/llm/llama.cpp/src/llama.cpp:17994: Deepsee…
-
Hi,
I received a weird error in a workflow that was previously working. Not sure if the "TODO: fix" mention is intended for comfyui, for llama_cpp, or for VLM nodes, but I figured I'd start here :). …
-
### Jan version
0.5.6
### Describe the Bug
When I try to use "Codestral 22B Q4", the prompt gives no response. The app.log says it failed to load.
In the log it is trying to load:
'/Volumes/T7B01/ai/ja…
-
### What is the issue?
No issues with any model that fits on a single 3090, but it seems to run out of memory when trying to distribute to the second 3090.
```
INFO [wmain] starting c++ runner | ti…
```
-
**Environment:**
1. Framework (TensorFlow, Keras, PyTorch, MXNet): TensorFlow
2. Framework version: tf 1.14.0
3. Horovod version: horovod/horovod:0.16.4-tf1.14.0-torch1.1.0-mxnet1.4.1-py3.6
4.…