-
**Description**
I have no idea how big my character file or my prompt is. I don't know how many tokens are in the chat history or context. A count of what is being sent would be nice …
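Even without access to the backend's real tokenizer, a rough client-side estimate would help. A minimal sketch, assuming the common rule of thumb of roughly 4 characters per token for GPT-style English text (the helper name and the heuristic itself are illustrative, not TavernAI's actual method):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 chars/token rule of thumb.
    Not a substitute for the backend's real tokenizer."""
    return max(1, len(text) // 4) if text else 0

# Summing the pieces gives a ballpark for the full prompt:
# character card + chat history + user message
pieces = ["You are a helpful assistant.", "Hello there!", "Hi, how can I help?"]
total = sum(estimate_tokens(p) for p in pieces)
```

A real implementation would call the backend's own tokenizer endpoint when one is available, falling back to a heuristic like this only for display purposes.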
-
**Description**
Not sure if this is possible, but it would be a good feature. I'd like to spin up a server, serving the oobabooga text generation through the API, and share it with friends, for …
-
For some reason, when TavernAI is using OpenAI with gpt-3.5-turbo, it will occasionally loop the same context infinitely without generating any output. This burns through OpenAI budget VERY quickly, as …
-
### Describe the bug
When --no-stream is not passed, text-generation-webui always generates 0 tokens in 0.00 seconds when accessed via the API.
### Is there an existing issue for this?
- [X] I have searc…
-
**Describe the bug**
The tokenization currently used in TavernAI is suboptimal, causing parts of prompts that should fit within a 2048-token context to be truncated.
**To Reproduce**
1. Loa…
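The usual fix for this class of bug is to trim the oldest chat messages until the prompt fits the context budget, using the same token counter the backend uses. A minimal sketch of that idea (the function name and the injected `count_tokens` callback are illustrative, not TavernAI's actual code):

```python
def trim_history(messages, max_tokens, count_tokens):
    """Drop the oldest messages until the total token count fits
    max_tokens. count_tokens should be whatever tokenizer the
    backend actually uses, so the count matches what gets truncated."""
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # discard oldest message first
    return kept
```

The key point is that the counter and the backend must agree; counting with one tokenizer and truncating with another is exactly what produces silently clipped prompts.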
-
KoboldCpp takes a while to generate, and it DOES eventually finish, but TavernAI gives up listening well before the process completes.
Is there a way to increase how long TavernAI waits?
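For reference, this is typically a client-side read timeout. A sketch of the idea in Python (the 600-second default, the helper name, and the endpoint shape are assumptions, not TavernAI's actual defaults):

```python
import json
import urllib.request

def post_generate(url: str, payload: dict, timeout_s: float = 600.0):
    """Hypothetical helper: POST a generation request with a long read
    timeout, so slow backends like KoboldCpp on CPU can finish before
    the HTTP client abandons the request."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # timeout bounds the socket read, not just the initial connect,
    # so it must exceed the backend's worst-case generation time
    return urllib.request.urlopen(req, timeout=timeout_s)
```

In TavernAI itself the equivalent change would be wherever its Node HTTP client sets its request timeout; the principle is the same.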
-
When a conversation goes on for a long time, you eventually start getting an error in the command prompt, and your chat log no longer saves any further messages.
Here's the full error message:
…
-
As the title says: after editing the config file, TavernAI launches without applying any of the parameters I set there. For example, I set autorun to false and the port to 9000, but it just launches with…
-
C:\Users\joshx\Downloads\TavernAI-main>call npm install
[##################] | reify:exifreader: timing reifyNode:node_modules/uuid Completed in 530ms
-
I'd like to point `sd` to an existing Stable Diffusion server I have up, running automatic1111's webui API. It'd be nice if there were a module that could hand generation off rather than running SD locally, especi…
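The automatic1111 webui does expose a `/sdapi/v1/txt2img` endpoint when launched with its API enabled, so a hand-off module would mostly be a thin HTTP client. A sketch under those assumptions (the default address, the payload fields beyond `prompt`/`steps`, and the function names are illustrative):

```python
import base64
import json
import urllib.request

A1111_URL = "http://127.0.0.1:7860"  # assumed default webui address

def build_txt2img_payload(prompt: str, steps: int = 20) -> dict:
    """Minimal request body; the real API accepts many more fields
    (sampler, cfg_scale, size, ...) that could be passed through."""
    return {"prompt": prompt, "steps": steps}

def txt2img(prompt: str, steps: int = 20) -> bytes:
    """Request one image from a running automatic1111 webui and
    return the decoded PNG bytes."""
    req = urllib.request.Request(
        f"{A1111_URL}/sdapi/v1/txt2img",
        data=json.dumps(build_txt2img_payload(prompt, steps)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The API returns generated images as base64-encoded strings
    return base64.b64decode(body["images"][0])
```

Delegating to an existing server this way also avoids loading a second copy of the SD weights on the same GPU as the language model.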