Closed: DaveScream closed this issue 2 months ago
Hey, I wrote that wiki page. Try this wheel I built yesterday.
Because you are on torch 1.13.1+cu116, you can use this one: https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/tag/torch13
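Side note for anyone picking wheels later: the xformers wheel has to match the torch/CUDA build that is already in the venv. A minimal check, run with the webui venv activated (the commented values are just examples, not requirements):

# Confirm which torch build the webui venv uses before choosing an xformers wheel.
import torch

print(torch.__version__)                    # e.g. 1.13.1+cu116
print(torch.version.cuda)                   # CUDA toolkit the build targets, e.g. 11.6
print(torch.cuda.get_device_capability(0))  # GPU compute capability, e.g. (8, 6)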
Not 100% sure what broke Dreambooth on later versions, but even the plain 0.0.14 release doesn't work. I built a good number of versions, including the ones suggested in another post; only 0.0.14.dev0 works for training.
@Zuxier is the issue only with the BW pass on later versions? https://github.com/facebookresearch/xformers/issues/631
On later versions, image generation works fine, but training does not, so I would assume it is an issue related to the BW pass; I'm not sure if there is more behind it. I'm available to test things out.
0.0.17dev476 was also working, but they removed that one too.
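For anyone who wants to narrow this down outside the webui: generation only exercises the forward attention kernel, while training also needs the backward kernel, so the two can be tested separately. A minimal sketch, assuming a CUDA build of xformers is installed; the tensor shapes and dtype are arbitrary illustrative values, not what the webui actually uses:

# Test the xformers forward and backward attention kernels independently.
import torch
import xformers.ops as xops

q = torch.randn(2, 1024, 64, device="cuda", dtype=torch.float16, requires_grad=True)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Forward only: this is all that image generation needs.
with torch.no_grad():
    out = xops.memory_efficient_attention(q, k, v)
print("forward ok:", tuple(out.shape))

# Forward + backward: this is what Dreambooth/training needs.
out = xops.memory_efficient_attention(q, k, v)
out.float().sum().backward()
print("backward ok:", q.grad is not None)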
Is there a way to build 0.0.14.dev0 nowadays? Or 0.0.17dev476?
Edit: never mind, I managed to do it. I can finally train again.
According to this tutorial, https://github.com/d8ahazard/sd_dreambooth_extension/wiki/Extremely-Experimental-Libs,
sd_dreambooth_extension now supports only xformers 0.0.14.dev0, so I installed that version from the wheel package https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl.
But with this whl package, while loading the web UI I get the error "Need to compile C++ extensions to get sparse attention suport. Please run python setup.py build develop".
And when I try txt2img I get this error:

Traceback (most recent call last):
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\modules\txt2img.py", line 52, in txt2img
    processed = process_images(p)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\modules\processing.py", line 480, in process_images
    res = process_images_inner(p)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\modules\processing.py", line 609, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\modules\processing.py", line 801, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\modules\sd_samplers.py", line 544, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\modules\sd_samplers.py", line 447, in launch_sampling
    return func()
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\modules\sd_samplers.py", line 544, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\modules\sd_samplers.py", line 337, in forward
    x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 776, in forward
    h = module(h, emb, context)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 324, in forward
    x = block(x, context=context[i])
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\modules\sd_hijack_checkpoint.py", line 4, in BasicTransformerBlock_forward
    return checkpoint(self._forward, x, context)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\checkpoint.py", line 249, in checkpoint
    return CheckpointFunction.apply(function, preserve, *args)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\checkpoint.py", line 107, in forward
    outputs = run_function(*args)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 262, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 293, in xformers_attention_forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops.py", line 862, in memory_efficient_attention
    return op.forward_no_grad(
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops.py", line 305, in forward_no_grad
    return cls.FORWARD_OPERATOR(
  File "C:\Software\Stable_Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops.py", line 46, in no_such_operator
    raise RuntimeError(
RuntimeError: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with `python setup.py develop`?
So I think that I need to rebuild xformers. But if I rebuild it myself, I get the unsupported version 0.0.16.
The question is: how do I build 0.0.14.dev0? Where can I get the sources for that version?
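For what it's worth, whether a given wheel actually ships the compiled operator can be checked directly, independent of the webui; the op name below is taken verbatim from the RuntimeError in the traceback above. A rough sketch:

# Check whether the installed xformers wheel registered its compiled CUDA ops.
import torch
import xformers  # importing xformers loads the compiled extension if it was built
                 # (this is also where the "Need to compile C++ extensions" warning comes from)

print("xformers version:", xformers.__version__)
try:
    _ = torch.ops.xformers.efficient_attention_forward_cutlass
    print("efficient_attention_forward_cutlass: present")
except (AttributeError, RuntimeError):
    print("efficient_attention_forward_cutlass: missing (wheel built without the C++/CUDA extension)")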
Windows 22H2
My Current venv pip freeze: absl-py==1.3.0 accelerate==0.12.0 addict==2.4.0 aenum==3.1.11 aiohttp==3.8.3 aiosignal==1.3.1 albumentations==1.3.0 altair==4.2.0 antlr4-python3-runtime==4.9.3 anyio==3.6.2 asttokens==2.2.1 astunparse==1.6.3 async-timeout==4.0.2 attrs==22.1.0 backcall==0.2.0 basicsr==1.4.2 bcrypt==4.0.1 beautifulsoup4==4.11.1 bitsandbytes==0.35.0 blendmodes==2022 blis==0.7.9 boltons==21.0.0 cachetools==5.2.0 catalogue==2.0.8 certifi==2022.12.7 cffi==1.15.1 chardet==4.0.0 charset-normalizer==2.1.1 clean-fid==0.1.29 click==7.1.2 clip @ git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1 cmake==3.25.0 colorama==0.4.6 coloredlogs==15.0.1 confection==0.0.3 contourpy==1.0.6 cryptography==38.0.4 cycler==0.11.0 cymem==2.0.7 decorator==4.4.2 deprecation==2.1.0 diffusers==0.10.2 discord-webhook==1.0.0 dynamicprompts==0.3.0 einops==0.4.1 en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.4.1/en_core_web_sm-3.4.1.tar.gz entrypoints==0.4 executing==1.2.0 facexlib==0.2.5 fairscale==0.4.9 fastapi==0.87.0 ffmpy==0.3.0 filelock==3.8.2 filterpy==1.4.5 flatbuffers==22.12.6 font-roboto==0.0.1 fonts==0.0.3 fonttools==4.38.0 frozenlist==1.3.3 fsspec==2022.11.0 ftfy==6.1.1 future==0.18.2 gast==0.4.0 gdown==4.6.0 gfpgan==1.3.8 gitdb==4.0.10 GitPython==3.1.27 google-auth==2.15.0 google-auth-oauthlib==0.4.6 google-pasta==0.2.0 gradio==3.15.0 grpcio==1.51.1 h11==0.12.0 h5py==3.7.0 httpcore==0.15.0 httpx==0.23.1 huggingface-hub==0.11.1 humanfriendly==10.0 idna==2.10 imageio==2.22.4 imageio-ffmpeg==0.4.7 importlib-metadata==5.1.0 inflection==0.5.1 install==1.3.5 invisible-watermark==0.1.5 ipython==8.6.0 jedi==0.18.2 Jinja2==3.1.2 joblib==1.2.0 jsonmerge==1.8.0 jsonschema==4.17.3 keras==2.11.0 kiwisolver==1.4.4 kornia==0.6.7 langcodes==3.3.0 lark==1.1.2 libclang==14.0.6 linkify-it-py==1.0.3 lit==15.0.6 llvmlite==0.39.1 lmdb==1.4.0 lpips==0.1.4 Markdown==3.4.1 markdown-it-py==2.1.0 MarkupSafe==2.1.1 matplotlib==3.6.2 matplotlib-inline==0.1.6 mdit-py-plugins==0.3.3 mdurl==0.1.2 modelcards==0.1.6 moviepy==1.0.3 mpmath==1.2.1 multidict==6.0.3 murmurhash==1.0.9 mypy-extensions==0.4.3 networkx==3.0 ninja==1.11.1 numba==0.56.4 numpy==1.23.3 oauthlib==3.2.2 omegaconf==2.2.3 onnx==1.13.0 onnxruntime==1.13.1 open-clip-torch==2.6.1 opencv-python==4.6.0.66 opencv-python-headless==4.6.0.66 opt-einsum==3.3.0 orjson==3.8.3 packaging==22.0 pandas==1.5.2 paramiko==2.12.0 parso==0.8.3 pathy==0.10.1 pickleshare==0.7.5 piexif==1.1.3 Pillow==9.4.0 pip-chill==1.0.1 preshed==3.0.8 proglog==0.1.10 prompt-toolkit==3.0.36 protobuf==3.19.6 psutil==5.9.4 pure-eval==0.2.2 pyasn1==0.4.8 pyasn1-modules==0.2.8 pycparser==2.21 pycryptodome==3.16.0 pydantic==1.8.2 pyDeprecate==0.3.2 pydub==0.25.1 Pygments==2.13.0 PyNaCl==1.5.0 pyparsing==3.0.9 pyre-extensions==0.0.23 pyreadline3==3.4.1 pyrsistent==0.19.2 PySocks==1.7.1 python-dateutil==2.8.2 python-multipart==0.0.4 pytorch-lightning==1.7.6 pytz==2022.6 PyWavelets==1.4.1 PyYAML==6.0 qudida==0.0.4 realesrgan==0.3.0 regex==2022.10.31 requests==2.25.1 requests-oauthlib==1.3.1 resize-right==0.0.2 rfc3986==1.5.0 rsa==4.9 safetensors==0.2.7 scikit-image==0.19.2 scikit-learn==1.2.0 scipy==1.9.3 seaborn==0.12.1 Send2Trash==1.8.0 sentencepiece==0.1.97 six==1.16.0 smart-open==6.3.0 smmap==5.0.0 sniffio==1.3.0 soupsieve==2.3.2.post1 spacy==3.4.4 spacy-legacy==3.0.10 spacy-loggers==1.0.4 srsly==2.4.5 stack-data==0.6.2 starlette==0.21.0 sympy==1.11.1 tabulate==0.9.0 tb-nightly==2.12.0a20221208 tensorboard==2.11.0 
tensorboard-data-server==0.6.1 tensorboard-plugin-wit==1.8.1 tensorflow==2.11.0 tensorflow-estimator==2.11.0 tensorflow-intel==2.11.0 tensorflow-io-gcs-filesystem==0.28.0 termcolor==2.1.1 thinc==8.1.5 threadpoolctl==3.1.0 tifffile==2022.10.10 timm==0.6.7 tokenizers==0.12.1 toolz==0.12.0 torch==1.13.1+cu116 torchaudio==0.13.1+cu116 torchdiffeq==0.2.3 torchmetrics==0.11.0 torchsde==0.2.5 torchvision==0.14.1+cu116 tqdm==4.64.1 traitlets==5.7.1 trampoline==0.1.2 transformers==4.19.2 typer==0.3.2 typing-inspect==0.8.0 typing_extensions==4.4.0 uc-micro-py==1.0.1 urllib3==1.26.14 uvicorn==0.20.0 wasabi==0.10.1 wcwidth==0.2.5 websockets==10.4 Werkzeug==2.2.2 wrapt==1.14.1 xformers @ https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl yapf==0.32.0 yarl==1.8.2 zipp==3.11.0
Installed NVIDIA CUDA versions (screenshot)
My CUDA system environment variables (screenshot)