-
### Description
I carefully watched the video https://www.youtube.com/watch?v=XbjjPdYRM_k, but the agent I created cannot chat in the Agent Interactions tab of AGiXT v1.3.122 (latest). I configured every…
-
When I try to access the API endpoint (like with TavernAI), it throws this:
![kobold](https://i.imgur.com/CwP1TPT.png)
And on Tavern logs this:
![tavern](https://i.imgur.com/96tQSNW.png)
Same thing whe…
-
Hi,
First off, thanks for the OPENBLAS tip. That cuts the initial prompt processing time down by about 3-4x!
I was wondering if it's possible to use the generate function as an API from another Python f…
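In the meantime, one way to drive a running koboldcpp instance from another Python script is to go through its KoboldAI-compatible HTTP endpoint rather than importing the generate function directly. This is a minimal sketch: the `/api/v1/generate` path and the `results[0].text` response shape follow the KoboldAI API, and the port is assumed to be the default 5001 — adjust both to your setup.

```python
import json
import urllib.request

# Assumption: koboldcpp is running locally on its default port.
API_URL = "http://localhost:5001/api/v1/generate"

def build_payload(prompt: str, max_length: int = 80) -> dict:
    """Assemble the JSON body for the generate endpoint."""
    return {"prompt": prompt, "max_length": max_length}

def extract_text(response_body: str) -> str:
    """Pull the generated text out of the API's JSON response."""
    return json.loads(response_body)["results"][0]["text"]

def generate(prompt: str) -> str:
    """POST a prompt to the local server and return the completion."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_text(resp.read().decode("utf-8"))
```

Calling `generate("Hello")` from any other Python process then reuses the already-loaded model without reinitializing it.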
-
> **Warning**. Complete **all** the fields below. Otherwise your bug report will be **ignored**!
**Have you searched for similar [bugs](https://github.com/SillyTavern/SillyTavern/issues?q=)?**
Yes…
-
The latest commit as of 5/28, 28f1196f65eb27a8f231fb83b38c91fb42a1a11a, fails to build on an Apple Silicon Mac. It seems to be caused by a dependency on the deprecated `stat64`, which didn't exist in prior versions.
Revert…
-
Even though this was added as a feature here: https://github.com/LostRuins/koboldcpp/issues/38
Newline doesn't seem to be working as a stop token for me, as seen here:
> Input: {"n": 1, "max_con…
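For comparison, a request that should trigger the stop-at-newline behavior passes the newline inside a `stop_sequence` array. The field name is the one the KoboldAI-style API uses; the other values below are illustrative, not taken from the truncated request above.

```python
import json

# Illustrative request body for the generate endpoint.
# Assumption: "stop_sequence" is the field recent koboldcpp builds
# read for stop tokens (KoboldAI API naming).
payload = {
    "n": 1,
    "max_context_length": 1024,
    "max_length": 200,
    "prompt": "You: Hello\nBot:",
    "stop_sequence": ["\n"],  # stop as soon as a newline is generated
}
body = json.dumps(payload)
```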
-
Is there a way to have generation stop when the bot starts a new line? For example, I have 200 tokens set, and even if I disable multiline responses, it will still generate an entire conversation w…
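Until that is handled server-side, a workaround is to truncate the response client-side at the first line break, discarding the extra conversation turns. A minimal sketch:

```python
def truncate_at_newline(generated: str) -> str:
    """Keep only the text up to the bot's first line break."""
    return generated.split("\n", 1)[0]
```

Note this still spends tokens generating the discarded lines; a real stop token would also save that compute.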
-
Subj.
-
It took a while to figure this out. Adding a console message along the lines of "Read the tooltip, dummy. Requires Kobold API...", or having stop sequences be ignored if API authentication ca…
-
https://www.reddit.com/r/LocalLLaMA/comments/13fnyah/comment/jjxc7x2/
unknown (magic, version) combination: 67676a74, 00000002; is this really a GGML file?
"Update models for llama.cpp May 12th …
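The magic `67676a74` is the ASCII string "ggjt", the newer GGML file variant; paired with version `00000002` it indicates a model quantized after the May 12th llama.cpp format change, which older loaders reject with exactly this message. A small sketch for peeking at a model file's header, assuming the magic and version are stored as two consecutive little-endian 32-bit integers as in llama.cpp of that era:

```python
import struct

GGJT_MAGIC = 0x67676A74  # spells "ggjt" in ASCII (0x67='g', 0x6a='j', 0x74='t')

def read_header(path: str) -> tuple:
    """Read the (magic, version) pair from the first 8 bytes of a model file.

    Assumption: both fields are little-endian uint32, matching the error
    message's (magic, version) pairing.
    """
    with open(path, "rb") as f:
        magic, version = struct.unpack("<II", f.read(8))
    return magic, version

# A loader that only knows older formats would see (0x67676a74, 2) from a
# freshly re-quantized file and bail out with the error quoted above; the
# fix is updating the loader, not the model.
```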