Open mikikokato opened 1 year ago
I get an error in Colab. Do you know what the problem is?
Are you using the latest notebook?
I get the same error with the latest notebook. It only works with attention optimization disabled, and then it runs out of memory. It was generating previews with sdp and other attention optimizations, but then it fails towards the end with the same error.
Using the `--xformers` argument works without an issue:
https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb
Thanks for the reply!
I suppose I need to edit the webui-user.bat file like below: `set COMMANDLINE_ARGS=--xformers`
Is that correct?
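For reference, on a local Windows install that edit would look like the sketch below. Note that `webui-user.bat` only applies to local installs; the Colab notebook typically sets the launch arguments inside the notebook cells instead.

```bat
@echo off
rem Sketch of a local webui-user.bat enabling xFormers (local installs only;
rem this file is not read by the Colab notebook).
set COMMANDLINE_ARGS=--xformers
call webui.bat
```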
> Using the `--xformers` argument works without an issue:
> https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb
I get this now:
```
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
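For context, "no kernel image is available" means the installed wheel contains no compiled kernels for the GPU's compute capability (the free Colab T4 is sm_75). A minimal sketch of that mismatch, with an illustrative arch list; the helper is hypothetical, not a PyTorch API:

```python
# Hypothetical sketch of the check behind "no kernel image is available":
# the wheel's compiled arch list must include the GPU's compute capability.
def has_kernel_for(arch_list, capability):
    major, minor = capability
    return f"sm_{major}{minor}" in arch_list

# A wheel built only for Ampere+ GPUs has no kernel for a T4 (sm_75):
print(has_kernel_for(["sm_80", "sm_86", "sm_90"], (7, 5)))  # False
print(has_kernel_for(["sm_75", "sm_80", "sm_86"], (7, 5)))  # True
```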
> Using the `--xformers` argument works without an issue:
> https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb
When was the update made? The day before yesterday there was an update to Google Colab, and at the top, before connecting, I ran:

```shell
!pip install lmdb
!pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 torchaudio==2.0.2 torchtext==0.15.2+cpu torchdata==0.6.1 --index-url https://download.pytorch.org/whl/cu118
```

I put this in and it worked, but it has not worked since yesterday.
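As a side note, the wheels pinned above all carry the `cu118` build tag except torchtext, which is the `cpu` build. A quick sketch for sanity-checking those version strings (a plain string check, not a real installer API):

```python
# Sketch: extract the local build tag a pinned version string carries,
# e.g. "2.0.1+cu118" -> "cu118" (torchtext is deliberately the +cpu build).
def build_tag(version):
    # str.partition returns ("2.0.1", "+", "cu118"); no "+" yields an empty tag.
    _, _, tag = version.partition("+")
    return tag

print(build_tag("2.0.1+cu118"))  # cu118
print(build_tag("0.15.2+cpu"))   # cpu
print(build_tag("2.0.2"))        # (empty: no local build tag)
```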
I am using it, but today I am still unable to generate with Google Colab. Why?
This is the error shown today; it is not working:

`Belongs to a different loop than the loop specified by the loop argument.`
Does anyone know what this is?
```
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(2, 4096, 8, 40) (torch.float16)
     key         : shape=(2, 4096, 8, 40) (torch.float16)
     value       : shape=(2, 4096, 8, 40) (torch.float16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`decoderF` is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see `python -m xformers.info` for more info
`flshattF@0.0.0` is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
    requires GPU with sm80 minimum compute capacity, e.g., A100/H100/L4
    Only work on pre-MLIR triton for now
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 40
```
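The decisive lines in that dump are the capability checks: flash attention needs compute capability (8, 0), while the Colab T4 reports (7, 5). A minimal sketch of that comparison (the helper name is hypothetical, not an xFormers API):

```python
# Hypothetical helper mirroring the capability check xFormers reports above:
# flash attention needs compute capability >= (8, 0); a Colab T4 is (7, 5).
def meets_capability(gpu, required):
    # Python compares tuples element-wise: major version first, then minor.
    return gpu >= required

print(meets_capability((7, 5), (8, 0)))  # T4 vs flash-attention requirement -> False
print(meets_capability((8, 6), (8, 0)))  # an Ampere card such as an RTX 3090 -> True
```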