-
As the title says, I'm unable to install the latest version through `pip`:
```
pip install flash-attn --no-build-isolation
Collecting flash-attn
Using cached flash_attn-2.6.3.tar.gz (2.6 MB)
P…
```
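For context, flash-attn builds from source here and its `setup.py` imports `torch`, so with `--no-build-isolation` the build dependencies have to already exist in the environment. A minimal sketch of the usual sequence per the flash-attn README (`ninja` is optional but speeds the build considerably):
```
# Build deps must be present when isolation is disabled,
# because flash-attn's setup.py imports torch at build time.
pip install torch packaging ninja
pip install flash-attn --no-build-isolation
```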
-
### System Info
python 3.10
rocky linux 9
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially support…
-
### Describe the bug
![image](https://github.com/user-attachments/assets/c99d8b94-7271-475f-a32d-3c9c8ec40bbc)
![image](https://github.com/user-attachments/assets/e188dcdf-6f00-417e-adbc-1b4d74f7c73…
-
I mainly use exl2 models since they offer better performance for local models, and Text-Generation-WebUI is the best option for running them locally.
-
[oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui), when run with the `--api` flag, publishes a locally available OpenAI API at http://127.0.0.1:5000/v1/. But trying to change the end…
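For reference, a minimal sketch of hitting that local endpoint once the webui is started with `--api` (the route and payload follow the OpenAI chat-completions schema; the prompt is a placeholder):
```
# Chat completion against text-generation-webui's OpenAI-compatible API
curl http://127.0.0.1:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```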
-
### Question
I downloaded llava-llama-2-13b from:
https://huggingface.co/liuhaotian/llava-llama-2-13b-chat-lightning-preview
Then I quantized the model to 4-bit using:
```
git clone htt…
```
-
Hello!
Forge with Flux doesn't work this morning, but it worked yesterday.
NeverOOM doesn't seem to be working (???).
Thanks!
On my HDD (D:), I have 110 GB free.
```
Python 3.10.11 (tags/v3.10…
```
-
### Describe the bug
llama.cpp doesn't see the Radeon RX 6900 XT; the previous version worked fine, so it seems some dependencies are missing (ROCm 5.7.1 is installed).
In particular, llama_cpp_cuda can not be import…
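A quick diagnostic sketch to surface the actual import error — the module names are assumed to match the webui's bundled llama-cpp-python wheels, and the ROCm path assumes a default install, so both are assumptions:
```
# Show why the ROCm build fails to import (module names assumed
# to match the webui's bundled llama-cpp-python wheels)
python -c "import llama_cpp; print('llama_cpp OK')"
python -c "import llama_cpp_cuda; print('llama_cpp_cuda OK')"
# Check that the ROCm 5.7.1 runtime libraries are actually visible
ls /opt/rocm/lib/libhipblas* 2>/dev/null || echo "hipBLAS libs not found"
```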
-
### Describe the bug
Llama.cpp fails to install, and the failure cascades through everything, bricking the entire installation and forcing a full reinstall. Even attempting a manual download of the…
-
Not sure if this is supposed to work on Forge in the first place, but when trying Magic Prompt I get the error below (I have more than enough spare VRAM):
```
WARNING:dynamicprompts.generators.magicpro…
```