NUROISEA / anime-webui-colab

webui on colab for weebs lol

Animagine-XL does not work upon relaunching #43

Open · NUROISEA opened this issue 5 months ago

NUROISEA commented 5 months ago

Both the WebUI and the extensions are at their latest versions.

Logs:

```
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Version: v1.9.0
Commit hash: adadb4e3c7382bf3e4f7519126cd6c70f4f8557b
Launching Web UI with arguments: --opt-sdp-attention --lowram --no-hashing --enable-insecure-extension-access --no-half-vae --disable-safe-unpickle --gradio-queue --ckpt /content/stable-diffusion-webui/models/Stable-diffusion/animagine-xl-3.1.safetensors --share
2024-04-15 08:07:10.044843: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-04-15 08:07:10.044920: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-04-15 08:07:10.050320: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
Loading weights [None] from /content/stable-diffusion-webui/models/Stable-diffusion/animagine-xl-3.1.safetensors
/content/stable-diffusion-webui/extensions/aspect-ratio-preset/scripts/sd-webui-ar.py:414: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  arc_calc_height = gr.Button(value="Calculate Height").style(
/content/stable-diffusion-webui/extensions/aspect-ratio-preset/scripts/sd-webui-ar.py:414: GradioDeprecationWarning: Use `scale` in place of full_width in the constructor. scale=1 will make the button expand, whereas 0 will not.
  arc_calc_height = gr.Button(value="Calculate Height").style(
/content/stable-diffusion-webui/extensions/aspect-ratio-preset/scripts/sd-webui-ar.py:422: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  arc_calc_width = gr.Button(value="Calculate Width").style(
/content/stable-diffusion-webui/extensions/aspect-ratio-preset/scripts/sd-webui-ar.py:422: GradioDeprecationWarning: Use `scale` in place of full_width in the constructor. scale=1 will make the button expand, whereas 0 will not.
  arc_calc_width = gr.Button(value="Calculate Width").style(
/content/stable-diffusion-webui/extensions/latent-couple-two-shot/scripts/two_shot.py:130: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  visual_regions = gr.Gallery(label="Regions").style(grid=(4, 4, 4, 8), height="auto")
/content/stable-diffusion-webui/extensions/latent-couple-two-shot/scripts/two_shot.py:130: GradioDeprecationWarning: The 'grid' parameter will be deprecated. Please use 'columns' in the constructor instead.
  visual_regions = gr.Gallery(label="Regions").style(grid=(4, 4, 4, 8), height="auto")
Running on local URL: http://127.0.0.1:7860/
Running on public URL: https://bc2a44aac1d64916d0.gradio.live/
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Startup time: 31.4s (prepare environment: 6.0s, import torch: 8.1s, import gradio: 2.2s, setup paths: 7.4s, initialize shared: 0.3s, other imports: 1.1s, load scripts: 1.6s, create ui: 2.8s, gradio launch: 1.9s).
Creating model from config: /content/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
creating model quickly: NotImplementedError
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/content/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "/content/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
    load_model()
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 723, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict())).cuda()
  File "/usr/local/lib/python3.10/dist-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 73, in cuda
    return super().cuda(device=device)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 825, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in <lambda>
    return self._apply(lambda t: t.cuda(device))
NotImplementedError: Cannot copy out of meta tensor; no data!
Failed to create model quickly; will retry using slow method.
loading stable diffusion model: NotImplementedError
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/content/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "/content/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
    load_model()
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 732, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict())).cuda()
  File "/usr/local/lib/python3.10/dist-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 73, in cuda
    return super().cuda(device=device)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 825, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in <lambda>
    return self._apply(lambda t: t.cuda(device))
NotImplementedError: Cannot copy out of meta tensor; no data!
Stable diffusion model failed to load
Applying attention optimization: sdp... done.
Loading weights [None] from /content/stable-diffusion-webui/models/Stable-diffusion/animagine-xl-3.1.safetensors
Creating model from config: /content/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
creating model quickly: NotImplementedError
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/content/stable-diffusion-webui/modules/ui.py", line 1154, in <lambda>
    update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
  File "/content/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
    load_model()
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 723, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict())).cuda()
  File "/usr/local/lib/python3.10/dist-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 73, in cuda
    return super().cuda(device=device)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 825, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in <lambda>
    return self._apply(lambda t: t.cuda(device))
NotImplementedError: Cannot copy out of meta tensor; no data!
```

The main takeaway is the `NotImplementedError: Cannot copy out of meta tensor; no data!`
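
For context, this error is easy to reproduce outside the WebUI: a module whose parameters live on PyTorch's "meta" device has no actual data, so `.cuda()` cannot copy it anywhere. Below is a minimal standalone sketch of the same error class on a recent PyTorch build; it is only an illustration, not the WebUI's own loading code.

```
import torch

# A module created on the "meta" device has placeholder parameters with no storage,
# so moving it to the GPU raises the same error seen in the log above.
layer = torch.nn.Linear(4, 4, device="meta")
try:
    layer.cuda()
except NotImplementedError as err:
    print(err)  # -> Cannot copy out of meta tensor; no data! ...
```

As far as I understand, the WebUI's "creating model quickly" path builds the model skeleton on the meta device and fills in the weights afterwards, which is why something interfering with that step surfaces as this exact error.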

NUROISEA commented 5 months ago

The ui-redesign notebook is also affected by this problem; its WebUI is running at 1.8.0-RC with lite extensions.

NUROISEA commented 5 months ago

Testing down the versions listed on the tag list (only testing non-RC versions):

All of the following were run with lite extensions.

| Version | Can relaunch? | Colab code modifications |
| ------- | ------------- | ------------------------ |
| v1.7.0  | ❌ | None |
| v1.7.0  | ✔️ | Commented out the entire `utility.patch_list()` block |
| v1.8.0  | ❌ | None |
| v1.8.0  | ✔️ | Commented out the entire `utility.patch_list()` block |
| v1.9.0  | ❌ | None |
| v1.9.0  | ✔️ | Commented out the entire `utility.patch_list()` block |

I will probably test v1.6.0 and lower, but I don't have the time for that right now.

It turns out that the problem lies inside the `patch_list()` function.

The code block of patches itself doesn't seem to be the root cause of the problem; everything points to the source file that the patches modify.

This section of the log confirms my suspicion:

  File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict())).cuda()

I'm not too sure why I have these "patches" in the first place, but it seems they were added because the old Colab environment was unreliable with memory. They might be safe to remove?

Needs more testing.
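
One plausible (unconfirmed) explanation for why this only shows up on a relaunch inside the same runtime: if `patch_list()` rewrites repository files as plain text and is not idempotent, the second run patches files that were already patched on the first run. Here is a rough sketch of a guard against that, assuming the patches are simple text substitutions; the helper below is hypothetical and not the notebook's actual code.

```
from pathlib import Path

def patch_once(path: str, old: str, new: str) -> None:
    """Apply a text substitution only if it has not been applied already."""
    source = Path(path)
    text = source.read_text()
    if new in text:
        # A previous run already applied this patch; skip to stay idempotent.
        return
    if old not in text:
        raise ValueError(f"pattern not found in {path}; the file may have been modified")
    source.write_text(text.replace(old, new))
```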

NUROISEA commented 5 months ago

v1.5.0 – v1.6.0 also yield the same result.

v1.4.0 straight up does not work.

Ayaya70 commented 5 months ago

This is exactly the same problem I had with PYOM. I used latest-dev last week with the experimental extensions_version, and it worked. On the other hand, I still don't know about Animagine-XL, since I'll only test it next week. Actually, wait, does this mean that Animagine-XL is not working on any version?

NUROISEA commented 5 months ago

> does this mean that Animagine-XL is not working on any version?

Only on v1.4.0, which is the Stable option in the version selector for the web UI.

If you want to use this notebook, use Latest and delete the following code in your notebook:

[attached screenshot of the code block to delete]

But it still works even if you don't delete it; the problem only shows up when you stop the cell and then run it again without killing your runtime. I only noticed this while doing experiments with the notebooks, since I had to stop and rerun them several times. So far, this problem is only present on Animagine-XL.

Ayaya70 commented 5 months ago

Currently testing on latest-dev and experimental; it works fine as long as I don't restart the cell. batchlinks is still my greatest rival and LoRAs look kinda broken, but oh my god man, I tested it today and, as a person who has been generating AI art 10 hours per week since February 2023, this is BY FAR the best model/checkpoint I've ever worked with. I have absolutely no words, this is insane. I'm shocked. Thank you Nuroisea for bringing it, I really mean it.

NUROISEA commented 5 months ago

WebUI 1.9.3: now all notebooks are affected :/

Ayaya70 commented 5 months ago

Tbh this model is so good I don't need to restart the cell anymore, so it's fine lol. As long as it's working I don't mind~