-
When launched with the `--port [port]` argument, the port number is ignored and the default port 5001 is used instead:
```text
$ ./koboldcpp.exe --port 9000 --stream
[omitted]
Starting Kobold HTTP Se…
```
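For what it's worth, the expected behavior is easy to pin down: an explicit `--port` should override the 5001 default. A minimal sketch of that contract (a hypothetical argparse-style parser for illustration, not koboldcpp's actual argument handling):

```python
# Hypothetical sketch of the expected --port behavior; this parser is an
# illustration, not koboldcpp's real CLI code.
import argparse

def parse_port(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--port", type=int, default=5001)  # 5001 is koboldcpp's default
    parser.add_argument("--stream", action="store_true")
    args = parser.parse_args(argv)
    return args.port

# An explicitly supplied port should win over the default:
assert parse_port(["--port", "9000", "--stream"]) == 9000
assert parse_port([]) == 5001
```

With this contract, the invocation in the log above would bind to 9000, not 5001.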
-
Hi, I use an Android device to run koboldcpp. BLAS is working as expected, even with the new RedPajama models.
However I am testing RWKV-4-Raven-3B-v11-Eng99-Other1-20230425-ctx4096-ggml-q5_1.bin
…
-
Failed to load q4_2 model from here -> https://huggingface.co/eachadea/ggml-vicuna-13b-1.1/tree/main
```
D:\Projects\REPOS\CPU Text Generation>koboldcpp --useclblast 0 0 --smartcontext --stream
W…
```
-
When using CLBlast on some devices (such as my Radeon 5700 XT), the program defaults to device ID 0 and platform ID 0, which causes the following error:
```
Initializing CLBlast (First Run)...
Attempting to u…
```
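For anyone hitting this: the two indices after `--useclblast` select the OpenCL platform and device, so a GPU that does not sit at `0 0` needs explicit indices. A small sketch of assembling such a command line (this helper is illustrative, not part of koboldcpp):

```python
# Hypothetical helper that builds a koboldcpp command line with an explicit
# CLBlast platform/device selection; illustration only, not koboldcpp code.
def clblast_cmd(platform_id: int, device_id: int, extra_flags=()):
    cmd = ["./koboldcpp.exe", "--useclblast", str(platform_id), str(device_id)]
    cmd.extend(extra_flags)
    return cmd

# e.g. select platform 1, device 0 instead of the 0 0 default:
cmd = clblast_cmd(1, 0, ["--smartcontext", "--stream"])
```

Checking which platform/device pair your GPU is enumerated under (e.g. with an OpenCL info tool such as `clinfo`) and passing those indices avoids the default-selection error.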
-
### Describe the bug
Currently I'm limited to basically using this repo for its useful scripts just to download models.
It doesn't seem to work very well out of the box with ggml models. Tried a n…
-
# Expected Behavior
When quantizing with llama.cpp, the quantization version should be written to the `ftype` in the hyperparameters.
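A sketch of the expected packing, mirroring ggml's `GGML_QNT_VERSION_FACTOR` scheme (the factor of 1000 comes from ggml; the function names here are illustrative, not llama.cpp's actual API):

```python
# Sketch of packing a quantization version into the ftype hyperparameter,
# following ggml's GGML_QNT_VERSION_FACTOR convention. Names are illustrative.
QNT_VERSION_FACTOR = 1000  # multiplier separating the version from the base ftype

def pack_ftype(base_ftype: int, qnt_version: int) -> int:
    """Encode the quantization version alongside the base file type."""
    return base_ftype + qnt_version * QNT_VERSION_FACTOR

def unpack_ftype(ftype: int) -> tuple[int, int]:
    """Recover (base_ftype, qnt_version) from a packed ftype."""
    return ftype % QNT_VERSION_FACTOR, ftype // QNT_VERSION_FACTOR

# e.g. base ftype 2 written with quantization version 1:
packed = pack_ftype(2, 1)   # -> 1002
assert unpack_ftype(packed) == (2, 1)
```

A loader can then recover both values from the single stored integer, which is why writing the plain base ftype (without the version component) loses information.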
# Current Behavior
An `ftype` is produced by `llama_model_…
-
On the latest git version on Linux, the program refuses to load older ggml models such as vicuna or gpt-x-alpaca.
The program is linked to OpenBLAS.
Loading model: /home/daniandtheweb/Applications/cha…
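A quick way to tell which ggml generation a file belongs to is its leading magic number. The magic values below come from llama.cpp's historical loaders; the helper itself is just an illustration, not part of koboldcpp:

```python
# Sketch: identify old vs. new ggml file formats by their leading magic
# number (constants from llama.cpp's historical loaders; helper is
# illustrative, not koboldcpp code).
import struct

MAGICS = {
    0x67676D6C: "ggml (unversioned, oldest)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (mmap-able, newest)",
}

def detect_ggml_format(path):
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))  # little-endian uint32
    return MAGICS.get(magic, f"unknown magic 0x{magic:08x}")
```

Old loaders reject newer magics and vice versa, which is one common cause of "refuses to load" errors on older files; checking the magic first narrows down whether the file or the loader is the mismatch.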
-
Hi guys, love the work.
I have been testing `TheBloke/minotaur-15B-GGML` and it is pretty solid; you can also test `TheBloke/minotaur-15B-GPTQ`.
-
This thread is dedicated to discussing the setup of the webui on Intel Arc GPUs.
You are welcome to ask questions as well as share your experiences, tips, and insights to make the process easier fo…