Open ThereforeGames opened 8 months ago
I have the same problem:

E:\SUPIR\venv\lib\site-packages\torch\nn\functional.py:5476: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
  attn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)

How to fix it? Thanks.
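For context, the warning comes from torch.nn.functional.scaled_dot_product_attention falling back to a non-fused kernel because the installed build lacks flash attention support; the result is still correct, just slower. The operation itself is softmax(QK^T / sqrt(d)) V. A minimal pure-Python sketch of that math (illustrative only, not the fused CUDA kernel the warning refers to):

```python
import math

def scaled_dot_product_attention(q, k, v):
    """Reference scaled dot-product attention: softmax(q @ k.T / sqrt(d)) @ v.

    q, k, v are lists of row vectors (lists of floats). Purely illustrative;
    real workloads use the optimized PyTorch kernel.
    """
    d = len(q[0])  # head dimension used for the 1/sqrt(d) scaling
    # Raw attention scores: dot product of each query row with each key row.
    scores = [[sum(qi * ki for qi, ki in zip(qrow, krow)) / math.sqrt(d)
               for krow in k] for qrow in q]
    out = []
    for row in scores:
        # Numerically stable softmax over the key axis.
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Weighted sum of the value rows.
        out.append([sum(w * vrow[j] for w, vrow in zip(weights, v))
                    for j in range(len(v[0]))])
    return out
```

The attention weights in each output row sum to 1, so the output is a convex combination of the value rows.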
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
pip install -U xformers==0.0.22.post4
Hello,
Thank you for sharing SUPIR with us! I am trying to run it on Windows using a GeForce 3090, but I receive the following warning during inference:
Looking at my system resources, VRAM is still at 100%, so maybe I just need to be more patient. That said, has anyone else run into this warning or know if there's a simple fix?
I have the --loading_half_params and --use_tile_vae flags enabled. Thank you.
EDIT: Can confirm that the upscale does work despite the warning. However, even with --use_8bit_llava it takes nearly 15 minutes to upscale at 1x resolution. VRAM usage is reportedly ~23.3 GB which, while technically within a 3090's limits, probably means offloading to CPU, given that other apps are using the GPU as well. But the good news is that --no-llava lets me upscale a 512px image to 1024px in 40 seconds, and it lowers VRAM requirements to 10.3 GB.