lolxdmainkaisemaanlu opened this issue 1 year ago
@TheLastBen same error
same
Same error. I get this when I'm using img2img.
I added the following code snippet right before the last step, but it didn't fix the problem: `!python /content/gdrive/MyDrive/sd/stablediffusion/setup.py develop`
same error
Same here
Doesn't matter if I use Dreambooth or Fast SD, same result. It shows a warning when running the Colab, and then Gradio can't process a request.
Same error
Training the text encoder...
/usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
WARNING: /usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
Need to compile C++ extensions to get sparse attention support. Please run python setup.py build develop
/usr/local/lib/python3.8/dist-packages/diffusers/models/attention.py:429: UserWarning: Could not enable memory efficient attention. Make sure xformers is installed correctly and a GPU is available: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with `python setup.py develop`?
  warnings.warn(
Same here
Only works on a premium GPU, for which xformers isn't used, I think.
> Only works on a premium GPU, for which xformers isn't used, I think.

So we won't be able to use SD on the free Colab anymore?
I noticed this early in the start process:
/usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
WARNING:xformers:WARNING: /usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
Need to compile C++ extensions to get sparse attention support. Please run python setup.py build develop
I tried SD 1.5 to see if that fixes it, but same problem when using img2img.
Same here, starting today
It doesn't work on a premium GPU either, so that's a red herring.
Same here, on both Colab files: Dreambooth and Automatic1111.
After using `!python setup.py build develop` I still get the same error below.

RuntimeError: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with `python setup.py develop`?
Having this issue too, I got flagged as posting a duplicate.
same issue
Converting to Diffusers ...
/usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
WARNING:xformers:WARNING: /usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
Need to compile C++ extensions to get sparse attention support. Please run python setup.py build develop
/usr/local/lib/python3.8/dist-packages/diffusers/models/attention.py:429: UserWarning: Could not enable memory efficient attention. Make sure xformers is installed correctly and a GPU is available: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with `python setup.py develop`?
warnings.warn(
Error on Model Download cell of fast dreambooth
same issue
Similar message for me. It seemed to occur every time I saved a checkpoint:

```
/usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
WARNING:xformers:WARNING: /usr/local/lib/python3.8/dist-packages/xformers/_C.so: undefined symbol: _ZNK3c104impl13OperatorEntry20reportSignatureErrorENS0_12CppSignatureE
Need to compile C++ extensions to get sparse attention support. Please run python setup.py build develop
/usr/local/lib/python3.8/dist-packages/diffusers/models/attention.py:429: UserWarning: Could not enable memory efficient attention. Make sure xformers is installed correctly and a GPU is available: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with `python setup.py develop`?
  warnings.warn(
Done, resuming training ...
```
Seems to be fixed: 3cf2052
Just tried running it a few minutes ago and I got the same error.
fixed (for the T4 at least), re-run the requirements cell
https://github.com/TheLastBen/fast-stable-diffusion/issues/904#issuecomment-1341612026
Retrying now! Thanks for quick response!
> fixed (for the T4 at least), re-run the requirements cell

What about paid Pro users? Can we access it too?
This is fixed, @lolxdmainkaisemaanlu can we close?
> What about paid Pro users? Can we access it too?

Are you getting the error with the A100?
Works for me now
Can't get it to work with the A100; using the normal GPU works. @TheLastBen
I'm also still getting this issue on the A100. T4 works though.
> What about paid Pro users? Can we access it too?
>
> Are you getting the error with the A100?

I was getting the error on a free account; I have no idea which GPU model Google gives me. But now I'm paying for Pro, so supposedly I get a better one than the free tier.
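(For what it's worth, a generic way to check which GPU the runtime actually assigned is to run `nvidia-smi` in a notebook cell; this is just a standard check, not something specific to these notebooks:)

```
# Show which GPU this Colab runtime was assigned (e.g. Tesla T4 or A100)
!nvidia-smi --query-gpu=name --format=csv,noheader
```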
Same exact error, and I'm running on a T4; I've been trying to get it to work all day. Help! I'm a total noob, by the way, I just started yesterday and am using the paid Colab version. Is there any way to see if I ran out?
> I'm also still getting this issue on the A100. T4 works though.

How?? Do I have to reset somehow?
> I'm also still getting this issue on the A100. T4 works though.
>
> How?? Do I have to reset somehow?

Yeah, just re-run the cells by hitting the play buttons again, or refresh the Colab page, disconnect, and start again.
On Colab, the T4 works; the A100 doesn't.
> On Colab, the T4 works; the A100 doesn't.

Still doesn't work for me :( Is there any way to completely reset my profile?
I'm getting this error now too.
ERROR!!!
Same for me: the T4 works, the A100 doesn't. How can it be fixed?
@TheLastBen please let us know in a post somehow when you resolve this issue; it seems many people hit the error in different situations. Mine happens when I try to resume training after closing the page and coming back, then I get the error, but resuming training within the same session works. Strange.
I'm having this error too. Last message is "returned non-zero exit status 1"
I also got the same error. I used both the Standard and Premium GPU, and even tried the TPU option just as part of testing. Same error.
You're getting the error now? Are you using the main repo or a fork of it?
@TheLastBen I used the notebook inside this repo: https://github.com/Dpbm/dreambooth-tutorial. Is that right?
That's not my notebook and not my repo. This is the repo: https://github.com/TheLastBen/fast-stable-diffusion
Click on the thumbnail in the README to get to the latest Colabs.
I've put the xformers wheels compiled by facebookresearch here:
https://github.com/brian6091/xformers-wheels/releases
This works on Google Colab for Tesla T4 (free) and A100 (premium).
Drop this in whatever cell you're running the xformers install:
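For example, something along these lines; the release tag and wheel filename below are placeholders, so substitute the actual .whl asset name from the releases page above:

```
# Hypothetical example: install a prebuilt xformers wheel instead of building it from source.
# Replace <TAG> and <WHEEL_FILENAME> with the release tag and .whl asset listed on the releases page.
!pip install --force-reinstall https://github.com/brian6091/xformers-wheels/releases/download/<TAG>/<WHEEL_FILENAME>.whl
```

You may need to restart the Colab runtime after installing so the new build gets picked up.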
@brian6091 did you test it on both GPUs?
Yes, no problems.
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/txt2img.py", line 49, in txt2img
    processed = process_images(p)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 430, in process_images
    res = process_images_inner(p)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 531, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 664, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers.py", line 507, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers.py", line 422, in launch_sampling
    return func()
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers.py", line 507, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_samplers.py", line 315, in forward
    x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/src/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 776, in forward
    h = module(h, emb, context)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_checkpoint.py", line 4, in BasicTransformerBlock_forward
    return checkpoint(self._forward, x, context)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/checkpoint.py", line 249, in checkpoint
    return CheckpointFunction.apply(function, preserve, *args)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/checkpoint.py", line 107, in forward
    outputs = run_function(*args)
  File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/attention.py", line 272, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 227, in xformers_attention_forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
  File "/usr/local/lib/python3.8/dist-packages/xformers/ops/memory_efficient_attention.py", line 967, in memory_efficient_attention
    return op.forward_no_grad(
  File "/usr/local/lib/python3.8/dist-packages/xformers/ops/memory_efficient_attention.py", line 343, in forward_no_grad
    return cls.FORWARD_OPERATOR(
  File "/usr/local/lib/python3.8/dist-packages/xformers/ops/common.py", line 11, in no_such_operator
    raise RuntimeError(
RuntimeError: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with `python setup.py develop`?