-
I used lm-format-enforcer with the llama-cpp-python example and adapted it a bit to get it to work, but I was limited to just what llama-cpp-python can handle. I have switched to using the Oobabooga …
-
Hi there!
Would it be possible to use a customized endpoint? I have my own local LLM with an OpenAI-standard API endpoint served by "oobabooga text-generation-webui".
Best regards,
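For questions like the one above, a minimal sketch of talking to an OpenAI-compatible endpoint can help. This example only builds the request URL and JSON body; the base URL, port 5000, and model name are assumptions (check your server's launch flags), not confirmed details from the report.

```python
import json

# Hypothetical local endpoint -- text-generation-webui's OpenAI-compatible API
# commonly listens on port 5000, but this is an assumption; verify your setup.
BASE_URL = "http://127.0.0.1:5000/v1"

def build_chat_request(base_url, model, user_message):
    """Build the URL and JSON body for an OpenAI-style chat completion call."""
    url = f"{base_url}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return url, json.dumps(body)

url, payload = build_chat_request(BASE_URL, "local-model", "Hello!")
# The payload can then be POSTed with any HTTP client (urllib, requests, ...).
```

The same request shape works against any server that mimics the OpenAI chat-completions API, which is what makes these local backends interchangeable from the client's point of view.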
-
```
Traceback (most recent call last):
  File "G:\-AI\oobabooga_windows\text-generation-webui\server.py", line 1186, in
    create_interface()
  File "G:\-AI\oobabooga_windows\text-generation-webui\s…
```
-
Hello, thank you for your hard work on a very interesting extension. I'm trying it out but am having issues enabling the extension once everything is installed. I have voicevox_engine installed, I c…
-
Hi, I don't have much programming experience, but I'm starting to get the hang of A1111. I'm very interested in IF prompt MKR and I've installed it in my A1111. However, I can't get it to work with Oo…
-
Hey!
Thanks for the UI. I have used it, but the format is not as nice as some of the streaming chats. Wondering if anyone has tried to integrate this with the oobabooga text UI with our own trained…
-
I'm not sure if I'm placing the URL for the local LLM API correctly, but I have something like this and I can't get the bot to work. Can you give me a solution so that it works with Oobabooga? Thanks.…
-
I have been informed that while Flash Attention is present, it is not being used:
https://github.com/oobabooga/text-generation-webui/issues/3759#issuecomment-2031180332
The post has a link to what has …
-
```
Traceback (most recent call last):
  File "C:\MyShit\AI\oobabooga_windows\text-generation-webui\installer_files\env\Lib\site-packages\gradio\queueing.py", line 407, in call_prediction
    outpu…
```
-
Tested with the default address and port (http://127.0.0.1:11434/) and with the models llama3.1 and llama3.1:latest. Mimicking the oobabooga setup did not result in a connection either. https://docs.sillytavern.app…
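When a local API URL refuses to connect, a quick first check is whether anything is listening on that host and port at all, independent of the API layer. This is a generic connectivity sketch, not part of any of the tools mentioned above; the URL in the comment is just the default address from the report.

```python
import socket
from urllib.parse import urlparse

def endpoint_reachable(url, timeout=2.0):
    """Return True if a plain TCP connection to the host:port in `url` succeeds.

    This only proves a listener exists; it says nothing about whether the
    API paths or model names are correct.
    """
    parsed = urlparse(url)
    host = parsed.hostname or "127.0.0.1"
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example, using the default address from the report above:
# endpoint_reachable("http://127.0.0.1:11434/")
```

If this returns False, the server is not running or is bound to a different address/port; if True, the problem is more likely the API path or the client-side configuration.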