-
So following the recommended install method of cloning the client repository and running `./play-rocm.sh`, the install goes smoothly, but as soon as it starts to load Kobold itself it immediately crashe…
-
It would be great if support for the SDXS-512 model could be added: https://github.com/IDKiro/sdxs
This is a major new development, especially for quickly generating images on the CPU, so it would …
-
Hello guys! I don't know if I can ask these questions here...
I want to know a few things.
First of all, I work on a Windows 11 computer. My setup is:
i5-10400F
16 GB RAM
RX 6600 XT
7B HF LLaMA model
C…
-
[kobold_debug.json](https://github.com/henk717/KoboldAI/files/15272513/kobold_debug.json)
For some reason token streaming just does not work. It's enabled and the actual terminal output from the se…
-
-
OS is Windows 11. I noticed KoboldCpp 1.64.1 has Vulkan driver support, so I gave it a try with my AMD 6800U: 32 GB RAM, 3 GB VRAM plus GPU shared memory, so its total VRAM can be boosted to 17 GB. It has vul…
-
[Mantella mod](https://github.com/art-from-the-machine/Mantella) introduces the possibility of talking to [Skyrim NPCs](https://www.nexusmods.com/skyrimspecialedition/mods/98631), revolutionizing the…
-
I tried Qwen2-72B-Instruct with both this quantization: https://huggingface.co/bartowski/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q4_K_M.gguf
And this one: https://huggingface.co/mraderma…
-
A number of open-source models like LLaMA 2 can run in local environments where a webserver (like LM Studio, KoboldCpp, etc.) exposes endpoints identical to OpenAI's. Can we have a flag/option …
-
I encountered the following error message after installing the entire package. I carefully followed each step in the guide. I also tried reinstalling, but it still shows the same error…