-
Hey guys, can you help?
I just got this error after running `python app.py --device cuda:0`:
```bash
Traceback (most recent call last):
File "/workspace/app.py", line 391, in
with gr.B…
-
Incredible project! I managed to run the model at good speed on my AMD hardware, thanks.
I have a question: do you have any plans to offload the weights to be able to run bigger models like 13B o…
-
This thread is dedicated to discussing the setup of the webui on AMD GPUs.
You are welcome to ask questions as well as share your experiences, tips, and insights to make the process easier for all…
-
Looks like this issue belongs here, as it's the ooba check that is failing:
https://github.com/KillianLucas/open-interpreter/issues/713
-
Hi, I tried to use the oobabooga webui with GPTQ models. My GPU has only 12GB of VRAM, so I would like to use the CPU-only version, since my PC has 32GB of RAM.
But the model does not seem to be able …
-
### Environment
🪟 Windows
### System
Google Chrome 121.0.6167.161. Windows 10 Pro. i7-13700K. 4070ti+3090+3080ti.
### Version
1.12.0
### Desktop Information
Staging, oobabooga, Ll…
-
When I try this in Colab:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("emilianJR/CyberRealistic_V3")
model = AutoModelForCausalLM.from_pretrained("emilianJR/CyberRealistic_V3")
```
HTTPError …
-
I have detailed in a closed ticket here (https://github.com/microsoft/DeepSpeed/issues/3342#issuecomment-1826447914) why the current instructions are unclear (along with photos showing what the Dee…
-
**Describe the bug**
I am running text-generation-webui in CPU mode. It starts normally, but when I click generate, the console prints this error:
> device = [d for d in self.hf_device_map.values() if d not in ('cpu…
-
### Describe the feature
Allow the user to configure prompt templates, or at minimum to set a prefix and suffix for the user-entered message.
VSCode with this extension provides a great UI for various model…
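The requested prefix/suffix behavior could be as simple as wrapping the user message before it is sent to the model. A minimal sketch, assuming a hypothetical `apply_template` helper and made-up default template strings (not part of any extension's actual config):

```python
# Hypothetical illustration of the requested feature: wrap a user-entered
# message with a configurable prefix and suffix. The default template
# strings below are assumptions for demonstration only.
def apply_template(user_message: str,
                   prefix: str = "### Instruction:\n",
                   suffix: str = "\n### Response:\n") -> str:
    """Return the full prompt: prefix, then the message, then suffix."""
    return f"{prefix}{user_message}{suffix}"

# Example usage with a custom template:
prompt = apply_template("Summarize this file.", prefix="<s>[INST] ", suffix=" [/INST]")
```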