-
```
===> Building for koboldcpp-1.57.1
[ 1% 4/64] cd /usr/ports/misc/koboldcpp/work/koboldcpp-1.57.1 && /usr/local/bin/cmake -DMSVC= -DCMAKE_C_COMPILER_VERSION=16.0.6 -DCMAKE_C_COMPILER_ID=Clang -…
```
-
I am not able to find much on batching support, but it appears that the upstream llama.cpp supports it.
https://github.com/ggerganov/llama.cpp/issues/4372
Any plans to expose this feature in k…
sirmo updated
2 months ago
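For reference, the upstream llama.cpp HTTP server already exposes parallel decoding through command-line flags; a sketch of how it is typically launched (the `-np`/`--parallel` and `-cb`/`--cont-batching` flags belong to llama.cpp's example server, not koboldcpp, and the model path is a placeholder):

```shell
# Launch llama.cpp's example server with continuous batching:
#   -np 4 : decode up to 4 sequences (client requests) in parallel
#   -cb   : enable continuous batching, so new requests join a running decode
./server -m ./models/model.gguf -c 4096 -np 4 -cb
```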
-
If I use koboldcpp's OpenAI API via SillyTavern or LibreChat, and then cancel the request via the stop buttons, more often than not, koboldcpp happily keeps generating new tokens until either the toke…
-
Hello,
I'm using:
* koboldcpp `1c5e05e4771b183783e559dbcb46dfbb4bf1c275`
* FreeBSD 15.0/amd64, clang 18.1.5
* AMD RX 6800 XT, FreeBSD drm-kmod 6.1.69, Mesa 24.0.7
* `LLaMA2-13B-Tiefighter.Q4_…
-
```
In file included from /usr/ports/misc/koboldcpp/work/koboldcpp-1.57.1/gpttype_adapter.cpp:29:
/usr/ports/misc/koboldcpp/work/koboldcpp-1.57.1/./otherarch/rwkv_v3.cpp:1379:9: error: cannot initia…
```
-
```
llama_model_load: error loading model: check_tensor_dims: tensor 'blk.0.ffn_down.weight' not found
llama_load_model_from_file: failed to load model
fish: Job 1, './koboldcpp-linux-x64-nocuda_new' te…
```
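A `check_tensor_dims`/"tensor not found" failure often means the binary is older than the file's layout (e.g. a quant produced by a newer llama.cpp). One quick sanity check before loading is to read the fixed GGUF header; a sketch based on the public GGUF spec for v2/v3 files (the `gguf_header` helper name is made up here):

```python
import struct

def gguf_header(blob: bytes) -> dict:
    """Parse the fixed GGUF header from the first 24 bytes of a file.

    Layout per the GGUF spec (v2/v3): 4-byte magic "GGUF", then a
    little-endian uint32 version, uint64 tensor count, and uint64
    metadata key/value count.
    """
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", blob, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensor_count": n_tensors, "kv_count": n_kv}

# Usage:
#   with open("model.gguf", "rb") as f:
#       print(gguf_header(f.read(24)))
```

If the reported version is newer than what the loader was built against, updating the binary (or re-downloading a quant made with a matching llama.cpp) is the usual fix.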
-
Thank you for sharing. Does it have to be installed on the C: drive? Would it be better to make a one-click portable version (like KoboldCpp, LM Studio, Faraday)? They come with their environment dependencies bundled and do not need …
-
### Environment
Self-Hosted (Bare Metal)
### System
Windows 10, Firefox 121, Chrome 121
### Version
SillyTavern 1.11.3
### Desktop Information
- Branch: release
### Describe the problem
Since…
-
Can you please add the models you are using for testing multimodal and image generation (name and where to find them)? I tried different models, and I can't get it to work. Even if I get the vision model t…
-
For reviewing confidential documents, we'd prefer to use our own APIs, either via oobabooga or koboldcpp. Is there the possibility of including options for using alternate APIs, including local or Cl…