-
OS: Windows 10
Two GPUs, a 3090 and a 4090.
Compared to Stable Diffusion on one GPU (4090) with "--precision full --no-half --xformers", it's the same speed!
-
# Docs
* https://github.com/LostRuins/koboldcpp/wiki#can-i-use-ssl
* https://lite.koboldai.net/koboldcpp_api#/v1/post_v1_chat_completions
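For reference, a minimal sketch of calling the OpenAI-compatible chat completions endpoint described in the second link; the host, port, placeholder model name, and prompt are assumptions for a local koboldcpp instance:

```python
import requests

# Minimal sketch: POST to koboldcpp's OpenAI-compatible chat completions
# endpoint. The base URL assumes a local koboldcpp instance on the default
# port 5001; adjust host/port to your setup.
url = "http://localhost:5001/v1/chat/completions"

payload = {
    # Assumption: the model field acts as a placeholder for whatever model
    # koboldcpp currently has loaded.
    "model": "koboldcpp",
    "messages": [
        {"role": "user", "content": "Translate 'good morning' to French."},
    ],
    "max_tokens": 64,
}

response = requests.post(url, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```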
# What do
1. Ruby server gets translate command
- A…
-
I use a self-compiled version of koboldcpp (the newest version).
Since I do not have AVX2 but I do have AVX1, I select the Vulkan (Old CPU) option; instead of showing me my NVIDIA Quadro card, it shows me "lvmpip…
-
Hi,
It would be really sweet if we could load a JSON/card directly from the command line, for instance:
`koboldcpp.exe llmg.bin --stream --launch --load myIncredibleStory.json`
Thanks for all your …
-
## Motivation
As the SHARK Studio application and its scope grow, we see an opportunity to rework the general code structure of the application from a stable diffusion webui to a multi-modal g…
-
The 1.58 .exe for ROCm doesn't seem to build correctly when run:
Running on a 6600 XT to test the new compatibility update; unsure if this is a PyTorch issue or something else.
-
Attempting to run this using my NVIDIA RTX 2070 Super (Turing architecture), which can run other things like stable-diffusion-webui or koboldcpp just fine...
```
> python server.py
Blocksparse is …
-
Hello!
I had the issue that, after the Llama installation, Llama didn't respond to any input. Alpaca worked fine. I found out that I had to copy the three files from C:\Users\USERNAME\dalai\llama\bu…
-
Currently, when I want to use OpenAI-like mock servers or proxy servers, there's no apparent way to manually modify openai.api_base or add headers to openai Completion/ChatCompletion requests.
…
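For context, with the legacy openai Python client (pre-1.0), pointing requests at a mock or proxy server looks roughly like the sketch below; the proxy URL and key are placeholders, and note there is still no obvious hook for extra per-request headers, which is what this request is about:

```python
import openai

# Legacy openai-python (<1.0) sketch: redirect requests to a mock/proxy
# server by overriding the module-level api_base. The URL below is a
# placeholder for whatever proxy or mock server you run locally.
openai.api_base = "http://localhost:8080/v1"
openai.api_key = "dummy-key"  # many mock servers accept any key

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp["choices"][0]["message"]["content"])
```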
-
**Describe the Issue**
Hello, everyone. I would like to raise a question and ask for help regarding my koboldcpp, which somehow slows down every time the program is minimized or running behind my browser.
I use…