-
### Describe the bug
Hello, I'm trying to use TheBloke/WizardLM-33B-V1.0-Uncensored-GPTQ with ExLlama, but oobabooga keeps forcibly switching to the GPTQ-for-LLaMa model loader. I've successfully used th…
-
After I created a database with some .PDF files, when I enter an instruction I get the error in the title and this output:
…
-
I see the readme.md says "Variety of models (h2oGPT, WizardLM, Vicuna, OpenAssistant, etc.) supported", so how do I deploy vicuna-13b with h2oGPT?
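One possible route, sketched from the conventions in h2oGPT's README (the `generate.py` entry point and `--base_model` flag appear there; the specific model id and the `--prompt_type` value below are assumptions, so check h2oGPT's documented prompt types before relying on them):

```shell
# Launch h2oGPT with a Vicuna base model pulled from Hugging Face.
# lmsys/vicuna-13b-v1.3 is one published 13B checkpoint; substitute as needed.
python generate.py \
    --base_model=lmsys/vicuna-13b-v1.3 \
    --prompt_type=instruct_vicuna  # assumed prompt-type name; verify locally
```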
-
How do I set this up if I have oobabooga installed on another machine?
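If the goal is to call a text-generation-webui instance running elsewhere, a minimal sketch is below. It assumes the remote machine was started with `python server.py --listen --api`, which (in mid-2023 builds) exposes the legacy blocking API on port 5000 at `/api/v1/generate`; the host IP and helper names here are placeholders, not part of any project's API.

```python
import json
import urllib.request

# Hypothetical remote host running oobabooga/text-generation-webui,
# started with:  python server.py --listen --api
REMOTE_HOST = "192.168.1.50"  # placeholder -- replace with your machine's IP


def build_generate_request(host: str, prompt: str, max_new_tokens: int = 200):
    """Build the URL and JSON payload for the legacy /api/v1/generate endpoint."""
    url = f"http://{host}:5000/api/v1/generate"
    payload = {"prompt": prompt, "max_new_tokens": max_new_tokens}
    return url, payload


def generate(host: str, prompt: str) -> str:
    """POST the prompt to the remote server and return the generated text."""
    url, payload = build_generate_request(host, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The legacy API wraps output as {"results": [{"text": "..."}]}
    return body["results"][0]["text"]
```

Usage would be `generate(REMOTE_HOST, "Hello")`, run from the local machine, with the remote machine's firewall allowing port 5000.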
-
[TheBloke](https://huggingface.co/TheBloke)'s (Tom Jobbins's) _Wizard-Vicuna-Uncensored_ models are performing very well for their size on the [Open LLM Leaderboard](https://huggingface.co/spaces/Hugg…
-
### Describe the bug
The whisper_stt extension runs into an error when trying to use the microphone inside the browser.
Tried Firefox and Ungoogled Chromium. The error message seems to be similar to #11…
-
### Describe the bug
It tries to chat with me, but can't get out a single word; it then clears the screen and starts over. The first time it tries sometimes takes a while, but subsequent attempts …
-
### Describe the bug
It's one of the models using llama.cpp's new 2/3-bit quantization technique. According to the model page, it needs one of the newest builds of llama.cpp (06 June 2023 build = 622), I assume …
-
Discussed briefly in Discord:
**Issue:** When using the `--model` command with play.bat, the following error is thrown (for both 4-bit and non-4-bit models):
![image](https://github.com/0cc4m/KoboldAI/a…
-
### System Info
I pulled the latest commits a few hours back and built the Docker image locally. I tried to use GPTQ models such as TheBloke's 33B with the new GPTQ changes to TGI. I am able to …