TheLastBen / fast-stable-diffusion

fast-stable-diffusion + DreamBooth
MIT License

Error displayed! NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs: query : shape=(2, 4096, 8, 40) #2627

Open mikikokato opened 1 year ago

mikikokato commented 1 year ago

Does anyone know what this means?

```
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(2, 4096, 8, 40) (torch.float16)
     key         : shape=(2, 4096, 8, 40) (torch.float16)
     value       : shape=(2, 4096, 8, 40) (torch.float16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`decoderF` is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see `python -m xformers.info` for more info
`flshattF@0.0.0` is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
    requires GPU with sm80 minimum compute capacity, e.g., A100/H100/L4
    Only work on pre-MLIR triton for now
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 40
```
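The message rejects every attention kernel for two stacked reasons: the installed xFormers wheel was built without CUDA kernels, and the Colab T4's compute capability (7, 5) is below the (8, 0) that the flash/triton kernels require. A minimal pure-Python sketch of that dispatch logic (kernel names and capability thresholds are taken from the error text above; everything else is illustrative, not xFormers' actual code):

```python
# Illustrative sketch (NOT xFormers' real dispatcher): why every kernel is
# rejected on a Colab T4. Availability here is keyed only on compute
# capability and on whether the wheel shipped CUDA kernels at all.

def supported_attention_ops(capability, built_with_cuda):
    """Return which attention kernels a GPU could use, by minimum capability."""
    # Assumed minimum compute capability per kernel family, matching the
    # error text: flash/triton need sm80, i.e. (8, 0).
    requirements = {
        "flshattF": (8, 0),
        "tritonflashattF": (8, 0),
        "cutlassF": (7, 0),
    }
    if not built_with_cuda:
        return []  # "xFormers wasn't build with CUDA support" -> nothing works
    return [op for op, min_cap in requirements.items() if capability >= min_cap]

# A Colab T4 reports capability (7, 5); the wheel in the error above was
# built without CUDA kernels, so no operator is found at all.
print(supported_attention_ops((7, 5), built_with_cuda=False))  # []
print(supported_attention_ops((7, 5), built_with_cuda=True))   # ['cutlassF']
```

With a correctly built wheel a T4 can still use the cutlass kernel, which is why reinstalling a CUDA-enabled xFormers (rather than changing GPUs) can resolve this.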

mikikokato commented 1 year ago

I get this error on Colab. Do you know what the problem is?

TheLastBen commented 1 year ago

Are you using the latest notebook?

kashyapjha commented 1 year ago

I get the same error with the latest notebook. It only works with attention optimization disabled, and then it runs out of memory. With sdp and other attention optimizations it was generating previews, but then it fails towards the end with the same error.

TheLastBen commented 1 year ago

using --xformers argument works without an issue https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb

Terenzio-Avantaggiato commented 1 year ago

Thanks for the reply!

I suppose I need to edit the webui-user.bat file like below: `set COMMANDLINE_ARGS= --xformers`

Is that correct?



kashyapjha commented 1 year ago

> using --xformers argument works without an issue https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb

I get this now:

```
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```

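This "no kernel image" error generally means the installed wheel's compiled GPU architectures don't include the running GPU. On a real install you can compare `torch.cuda.get_arch_list()` against `torch.cuda.get_device_capability()`; a minimal sketch of that check follows (the arch lists are made-up examples, not what any particular wheel ships):

```python
# Sketch of the check behind "no kernel image is available": a wheel only
# ships binaries (cubins) for the SM architectures it was compiled for,
# plus optional PTX ("compute_XY") that can be JIT-compiled forward.

def has_kernel_image(device_capability, compiled_archs):
    """True if the wheel can run on a GPU with this compute capability."""
    major, minor = device_capability
    if f"sm_{major}{minor}" in compiled_archs:
        return True  # exact binary match
    # PTX entries can be JIT-compiled for *newer* GPUs, so they never
    # rescue a GPU older than the lowest compiled architecture.
    return any(
        arch.startswith("compute_")
        and int(arch.split("_")[1]) <= major * 10 + minor
        for arch in compiled_archs
    )

# A wheel built only for Ampere and newer cannot run on a Turing T4 (sm_75):
print(has_kernel_image((7, 5), ["sm_80", "sm_86", "compute_80"]))  # False
print(has_kernel_image((8, 6), ["sm_80", "sm_86", "compute_80"]))  # True
```

So an Ampere-only torch/xFormers build fails on a T4 even though the same build runs fine on an A100, which would explain the error appearing only on certain Colab GPU assignments.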
mikikokato commented 1 year ago

> using --xformers argument works without an issue https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb

When was the update made? The day before yesterday there was a Google Colab update, and adding this above the connect cell made it work:

`!pip install lmdb`
`!pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 torchaudio==2.0.2 torchtext==0.15.2+cpu torchdata==0.6.1 --index-url https://download.pytorch.org/whl/cu118`

But it has not worked since yesterday.
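The pin above works because torch and its companion packages must match release-for-release: torch 2.0.1 pairs with torchvision 0.15.2, torch 2.0.0 with torchvision 0.15.1, and so on. A small sketch of that pairing rule (the table is abbreviated and hand-maintained here; the official torch/torchvision compatibility matrix is authoritative):

```python
# Sketch: torch and torchvision wheels must match release-for-release,
# which is why the pinned pip install above names both versions explicitly.
# Abbreviated, hand-written table -- consult the official compatibility
# matrix for the full list.

TORCHVISION_FOR_TORCH = {
    "2.0.1": "0.15.2",
    "2.0.0": "0.15.1",
    "1.13.1": "0.14.1",
}

def matching_torchvision(torch_version):
    """Return the torchvision release that pairs with a given torch release."""
    try:
        return TORCHVISION_FOR_TORCH[torch_version]
    except KeyError:
        raise ValueError(f"no known torchvision pairing for torch {torch_version}")

print(matching_torchvision("2.0.1"))  # 0.15.2
```

A mismatched pair (e.g. a Colab runtime upgrading torch underneath a pinned torchvision) is a common cause of the import and CUDA errors seen in this thread.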

mikikokato commented 1 year ago

I am using the latest notebook. Today I am still unable to generate with Google Colab. Why?

mikikokato commented 1 year ago

Error displayed again today; it's still not working:

`Belongs to a different loop than the loop specified by the loop argument.`