-
Hi! I would like to try to get RWKV v6 models working with ollama.
llama.cpp already supports them.
-
Currently ollama fails to load the model due to a bug in llama.cpp. Here's the fix PR: https…
-
Ran into an issue running preprocessing with RABIES.
I'm running it locally. It skips the inho_correction step altogether but continues to run. I halted the processing, so the log file attached isn't comp…
-
Getting an error when trying to set the system message, code below:
`ollama._types.ResponseError: no FROM line for the model was specified`
```python
def ollama_chat_response(message, history, …
```
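That message is the one ollama returns when a model is created from a Modelfile that lacks a FROM line, which suggests the code is hitting the create path rather than chat. If the goal is just to set a system message for a conversation, a minimal sketch with the ollama Python client sidesteps model creation entirely (the model name is a placeholder, and history is assumed to arrive as (user, assistant) pairs, Gradio-style):
```python
import ollama

# Minimal sketch: supply the system message as a "system"-role entry
# in the messages list instead of baking it into a custom model.
# "llama3" is a placeholder model name; history is assumed to be a
# list of (user, assistant) pairs as in Gradio-style chat callbacks.
def ollama_chat_response(message, history):
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    for user_msg, assistant_msg in history:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    messages.append({"role": "user", "content": message})
    response = ollama.chat(model="llama3", messages=messages)
    return response["message"]["content"]
```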
-
Just wanted to make you aware that the output currently doesn't render properly with the latest SceneKit framework on iOS/iPadOS and visionOS.
It is displaying correctly everywhere else other th…
-
Hello, cagdasbak, it's very nice of you to release your model. I am very interested in it, and I tried to
test with the model you provided, but caffe gives the error: **failed to parse NetParameter fil…**
-
I tested this app a few weeks ago and it's an elegant proof of concept - thanks so much for sharing!
I have specific system prompts that I enable via a Modelfile and then create my own "custom model…
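For context, a minimal sketch of that Modelfile workflow, with the base model name and the prompt text as placeholders:
```
# Minimal Modelfile sketch; "llama3" and the prompt text are placeholders.
FROM llama3
SYSTEM """You are a concise assistant that answers in bullet points."""
```
Building it with `ollama create custom-model -f Modelfile` then lets `ollama run custom-model` apply the system prompt on every chat.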
-
Context window size is largely manual right now – it can be specified via `{"options": {"num_ctx": 32768}}` in the API or via `PARAMETER num_ctx 32768` in the Modelfile. Otherwise the default value is…
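A minimal sketch of the per-request form over the REST API (endpoint and option name as documented above; the model name and prompt are placeholders):
```python
import requests

# Per-request context window via the ollama REST API.
# "llama3" and the prompt are placeholder values.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Summarize the plot of Hamlet in two sentences.",
        "options": {"num_ctx": 32768},  # overrides the default context size
        "stream": False,  # return one JSON object instead of a stream
    },
)
print(resp.json()["response"])
```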
-
EmbeddedResource with two dots in the file name does not work, meaning GetManifestResourceNames does not return anything.
Works:
```xml
<!-- Illustrative item; the original file names were lost from the excerpt -->
<EmbeddedResource Include="data.txt" />
```
Does not work:
```xml
<!-- Two dots in the file name -->
<EmbeddedResource Include="data.v1.txt" />
```
-
## ❓ General Questions
I have definitely installed tvm on my device, which has an arm64 CPU, and I want to run mlc_llm on it for model inference. But when I installed mlc_llm on my device li…
-
I'm currently writing a webui for ollama but I find the API quite limited/cumbersome.
What is your vision/plan regarding it? Is it in a frozen state, or are you planning to improve it?
Here's som…