-
### Motivation
The min_p sampling parameter is becoming quite popular. It's conceptually simple and "makes sense", and (at least anecdotally, according to opinions of many model fine-tune…
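For context, min_p sampling keeps only tokens whose probability is at least some fraction of the most likely token's probability. A minimal sketch of the idea (function name and threshold value are illustrative, not from any particular implementation):

```python
import numpy as np

def min_p_filter(probs: np.ndarray, min_p: float = 0.05) -> np.ndarray:
    """Zero out tokens whose probability is below min_p times the top
    token's probability, then renormalize the survivors."""
    threshold = min_p * probs.max()
    filtered = np.where(probs >= threshold, probs, 0.0)
    return filtered / filtered.sum()

# With min_p=0.1 the cutoff is 0.1 * 0.5 = 0.05, so the last two
# tokens are dropped and the rest are renormalized.
probs = np.array([0.5, 0.3, 0.15, 0.04, 0.01])
print(min_p_filter(probs, min_p=0.1))
```

The appeal is that the cutoff scales with model confidence: when the top token is near-certain, almost everything else is pruned; when the distribution is flat, more candidates survive.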
-
**Description**
I would like to request an update of the Exllamav2 module to version 0.1.8. This version includes various bug fixes, such as https://github.com/turboderp/exllamav2/issues/566
**…
-
**Have you searched for similar [bugs](https://github.com/SillyTavern/SillyTavern/issues?q=)?**
Yes
**Describe the question**
I have oobabooga up and running and everything works; --api is enabled.
…
-
Stumbled across the repo and was interested in trying out the assistant concept with some beefed-up local model settings.
My env is quite different from what's used here, and my goal was to document…
-
I'm having an issue where I'm trying to run an example using a zero-shot agent and a basic tool via your short_instruction example.
If I load in the OpenAI API as the LLM and run all the other co…
-
### Describe the bug
Ever since version v1.11 I've had an issue where the UI tries to save a folder whose name exceeds the filename limit of my OS.
My guess is that this bug could have been introduc…
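Not the project's actual fix, but a sketch of how a generated folder name could be clamped to a filesystem limit (most filesystems cap names at 255 bytes; the function name and limit here are assumptions):

```python
def clamp_folder_name(name: str, max_bytes: int = 255) -> str:
    """Truncate a folder name so its UTF-8 encoding fits within max_bytes."""
    encoded = name.encode("utf-8")
    if len(encoded) <= max_bytes:
        return name
    # Cut at the byte limit; errors="ignore" drops any multi-byte
    # character that was split by the cut.
    return encoded[:max_bytes].decode("utf-8", errors="ignore")

print(len(clamp_folder_name("a" * 300)))  # 255
```

Truncating on the encoded byte length matters because OS limits are usually in bytes, not characters, so non-ASCII names hit the limit sooner.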
-
### Issue with current documentation:
I believe the Oobabooga Text Generation Web UI API was rewritten, causing the code on the TextGen page of the Langchain docs to stop working.
e.g.: the way th…
-
I'm seeing an example of 29s of audio rendered in ~3s, so about a 10:1 ratio on a 4090 here:
https://github.com/RandomInternetPreson/text_generation_webui_xtt_Alts/tree/main#example
But on my 40…
-
### Describe the bug
When using Ollama with the model llama3:70b, the returned Python code contains a stray "`" character, which prevents the code from being executed.
### Reproduce
1. Run the interpreter with this command: interpreter --m…
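A common workaround for this class of bug is to strip markdown fences and stray backticks from model output before executing it. A minimal sketch (the function and its regexes are illustrative, not from the project's codebase):

```python
import re

def strip_code_fences(text: str) -> str:
    """Remove markdown code fences and stray trailing backticks
    from LLM-generated code before execution."""
    text = text.strip()
    # Drop an opening fence like ``` or ```python, if present.
    text = re.sub(r"^```[a-zA-Z]*\s*\n?", "", text)
    # Drop a closing fence at the end, if present.
    text = re.sub(r"\n?```$", "", text)
    # Remove any leftover trailing backticks, as in this bug report.
    return text.rstrip("`").strip()

print(strip_code_fences("```python\nprint('hello')\n```"))  # print('hello')
print(strip_code_fences("print('x')`"))                     # print('x')
```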
-
We tried many local models, such as LLaMA, Vicuna, OpenAssist, and GPT4All, in their 7B versions. None seem to give results like the ChatGPT API.
We would like to try to test new models, which can be loade…