-
Gradio clients that run local language models, such as "OobaBooga," and offer API support should be a major consideration for the roadmap process. Creating usable model swapping with a cache functionali…
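The model-swapping-with-cache idea above can be sketched as a small LRU cache keyed by model name. This is a minimal illustration, not project code: `load_fn` is a hypothetical stand-in for whatever actually loads a model into memory.

```python
from collections import OrderedDict


class ModelCache:
    """LRU cache sketch for swapping between loaded models.

    `load_fn` is a hypothetical loader callback; `max_models` bounds how
    many models stay resident before the least-recently-used one is evicted.
    """

    def __init__(self, load_fn, max_models=2):
        self.load_fn = load_fn
        self.max_models = max_models
        self._cache = OrderedDict()  # name -> loaded model, oldest first

    def get(self, name):
        if name in self._cache:
            self._cache.move_to_end(name)  # mark as most recently used
        else:
            if len(self._cache) >= self.max_models:
                self._cache.popitem(last=False)  # evict least recently used
            self._cache[name] = self.load_fn(name)
        return self._cache[name]
```

Repeated requests for the same model then hit the cache instead of reloading, while rarely used models are dropped first.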
-
It doesn't seem to work on Windows and is unable to detect my CUDA installation.
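One quick way to narrow down a CUDA-detection failure like this is to check which signals are visible to the environment at all. A minimal diagnostic sketch using only the standard library (the `CUDA_PATH` variable is set by the CUDA toolkit installer on Windows; `nvidia-smi` ships with the driver):

```python
import os
import shutil


def cuda_diagnostics():
    """Collect basic signals about whether a CUDA toolkit/driver is visible."""
    return {
        "CUDA_PATH": os.environ.get("CUDA_PATH"),  # toolkit install location, if any
        "nvidia-smi": shutil.which("nvidia-smi"),  # driver utility on PATH, if any
    }


print(cuda_diagnostics())
```

If both come back `None`, the problem is the CUDA install itself rather than the webui.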
```
(textgen) C:\Users\pasil\text-generation-webui>python server.py --cai-chat --load-in-8bit
Warning:…
```
-
### Describe the bug
As of today, no message is sent back by the AI. Settings are the default Colab/Gradio ones; I don't know how this computer beep-boop works.
### Is there an existing issue for thi…
-
I'm having an issue loading the extension:
```
21:15:37-368368 ERROR Failed to load the extension "Playground".
Traceback (most recent call last):
File "C:\Users\USER1\pinokio\api\oobabooga.pinok…
```
-
We need to create a strategy and examples demonstrating how the community can create separate repos for plugins and connectors, so they don't need to live in the core Semantic Kernel repo.
-
### Self Checks
- [X] I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to fi…
-
Unsure if this is an exllamav2 issue or a llama-cpp issue. (In contrast, GGUF Q8_0 conversion of BF16 worked.)
When I loaded it via ooba/llama-cpp, inference broke when context length exceeded 4K, al…
-
### Discussed in https://github.com/oobabooga/text-generation-webui/discussions/5150
Originally posted by **jbarker7** January 2, 2024
I've got Oobabooga up and running no problem and just fi…
-
Where are the LLaVA extensions for oobabooga?
-
### Describe the bug
With just the CPU I'm only getting ~1 token/s.
(I haven't specified any arguments like core/thread counts, but I wanted to first test baseline performance, with the GPU as well.)
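For comparing runs like this, tokens/s can be measured directly rather than eyeballed. A small timing sketch, where `generate_fn` is a hypothetical stand-in for whatever produces the tokens:

```python
import time


def tokens_per_second(generate_fn, prompt):
    """Rough throughput estimate: tokens generated divided by wall time.

    `generate_fn` is a placeholder that takes a prompt and returns the
    generated tokens as a sequence.
    """
    start = time.perf_counter()
    tokens = generate_fn(prompt)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed
```

Running the same prompt with and without GPU offload then gives directly comparable numbers.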
I instal…