-
The llama.cpp loader accepts a path for GGUF models, but I was not able to find this option for models loaded with HF transformers. I have quite a few models that I use with https://github.com/oobaboog…
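(As a side note, a minimal and purely illustrative sketch of the distinction: transformers' `from_pretrained` accepts a local directory path as well as a hub repo ID, so a loader could branch on whether the argument is an existing directory. The helper name below is hypothetical, not part of text-generation-webui.)

```python
# Hypothetical helper sketching how a loader could accept either a local
# model directory or a Hugging Face hub ID. transformers' from_pretrained
# handles local directory paths directly, so the only decision needed is
# whether the string points at an existing folder.
import os

def resolve_model_source(name_or_path: str) -> str:
    """Return 'local' for an existing directory, otherwise assume a hub ID."""
    return "local" if os.path.isdir(name_or_path) else "hub"

print(resolve_model_source("."))         # existing directory -> "local"
print(resolve_model_source("org/repo"))  # not a directory    -> "hub"
```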
-
### Describe the bug
Hello,
I found this model, which I thought would be good for this task since it's a Mixtral fine-tune with function calling, but I get endless nonsense generation until the contex…
-
### Describe the bug
When attempting to launch start_windows.bat (after running it the first time for setup), the following error is thrown:
(D:\text-generation-webui\installer_files\env) D:\text-generation…
-
### Describe the bug
It seems it forces sampling the first token before the context has finished processing or something along those lines. Not sure if it applies to the regular llama.cpp backend o…
-
Hiya!
Hope you are keeping well! :)
I thought I would let you know, text-gen-webui has bumped its version of PyTorch to 2.2.x https://github.com/oobabooga/text-generation-webui/commit/164ff2440…
-
Please help. I've been getting gibberish responses with ExLlamav2_HF. I saw this post: https://github.com/oobabooga/text-generation-webui/pull/2912
But I'm a newbie, and I have no idea what half 2 …
-
### Describe the bug
DEPRECATION: omegaconf 2.0.6 has a non-standard dependency specifier PyYAML>=5.1.*. pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer v…
-
### Describe the bug
I had many conversations with long replies turned on and set to 400 using these 2 models, loyalmacaroni-7b and sonya-7b.
As their content reached 400, the CPU went down, and th…
-
Hi. I am trying to use tavern.ai with the text-generation webui. However, there seems to be an issue between the new OpenAI-compatible API and tavern.ai. I already tried to copy over the API extension to …
-
I'd like to suggest adding built-in optimization to the image save call (like 179), making it read:
image.save(buffered, format="JPEG", optimize=True)
https://github.com/oobabooga/text-generation-webui/bl…