-
Greetings!
I've got very slow generation times using tortoise and I get the following error constantly with nearly every tool in the whole UI:
D:\AI-Apps\tts-generation-webui-main\installer_file…
-
I installed Comfy UI, opened it, loaded the default workflow, loaded an XL model, and clicked Start; then this warning appeared.
It reduces my generation speed tenfold.
got prompt
model_type EPS
adm 2816
Using…
-
### Describe the bug
When I load my model and try to use it, I get an error.
13:41:11-717356 INFO Saved "I:\programming\text-generation-webui\presets\My Preset.yaml".
I:\programming\text-generat…
-
Loading pipeline components...: 100%|██████████| 7/7 [00:02
-
It looks like Flash Attention was removed from the Python portion in release 4.4.0. I had a few questions:
1) Can you confirm that Flash Attention is still available in release 4.3.1? No benchma…
-
### Is your feature request related to a problem? Please describe.
The current implementation causes issues when loading old model checkpoints during inference as it is not clear whether flash attent…
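One way to make this explicit (a sketch of the idea, not the project's actual API) is to record an attention-backend flag in the checkpoint metadata, so that inference code can tell which implementation a checkpoint expects instead of guessing; the helper names below are hypothetical.

```python
import torch

# Hypothetical helper: save a flag alongside the weights so inference
# code can tell which attention implementation the checkpoint expects.
def save_checkpoint(model_state, path, used_flash_attention: bool):
    torch.save(
        {"state_dict": model_state, "flash_attention": used_flash_attention},
        path,
    )

def load_checkpoint(path):
    ckpt = torch.load(path, weights_only=True)
    # Old checkpoints written before the flag existed default to
    # the non-flash code path.
    return ckpt["state_dict"], ckpt.get("flash_attention", False)

save_checkpoint({"w": torch.zeros(2)}, "ckpt.pt", used_flash_attention=True)
state, use_flash = load_checkpoint("ckpt.pt")
print(use_flash)  # True
```

Defaulting the missing key to `False` keeps old checkpoints loadable without any migration step.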
-
I'd love to see a clean example of a transformer that integrates flex attention. I haven't found any samples that do this.
For reference, I have a transformer-based model that uses the TransformerE…
-
Thank you for taking the time to review my question.
Before I proceed, I would like to mention that I am a beginner, and I would appreciate your bearing that in mind.
I am seeking assistan…
-
### Describe the bug
Duplicating #666 since that issue was never reopened and the bug is still present
On version 0.5.8, SideStore is still showing an update prompt for old versions of SideStore.
…
-
I used [sd-perturbed-attention](https://github.com/pamparamm/sd-perturbed-attention) and had clear, mostly positive results with it. Integrated Perturbed-Attention Guidance was presented with limited …