lllyasviel / stable-diffusion-webui-forge

GNU Affero General Public License v3.0

[Bug]: Specific Lora Fails to Load #453

Open CircleD5 opened 6 months ago

CircleD5 commented 6 months ago

What happened?

Encountered a bug where a specific Lora does not load when attempting to insert it through the Lora tab in the WebUI Forge. Interestingly, it works with Automatic1111's WebUI.

Error Message

Traceback (most recent call last):
  File "D:\SD\forge\stable-diffusion-webui-forge\extensions-builtin\Lora\networks.py", line 68, in load_networks
    net = load_network(name, network_on_disk)
  File "D:\SD\forge\stable-diffusion-webui-forge\extensions-builtin\Lora\networks.py", line 31, in load_network
    net.mtime = os.path.getmtime(network_on_disk.filename)
AttributeError: 'NoneType' object has no attribute 'filename'

Steps to reproduce the problem

The problematic Lora can be found at the following URL. Please note, this is an NSFW Lora, so proceed with caution. https://civitai.com/models/319572?modelVersionId=358351

What should have happened?

When selecting the mentioned Lora in the Lora tab of the browser-based WebUI, it should correctly insert into the prompt and be utilized for generation.

What browsers do you use to access the UI?

Microsoft Edge

Console logs

venv "D:\SD\forge\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.16v1.8.0rc-latest-268-gb59deaa3
Commit hash: b59deaa382bf5c968419eff4559f7d06fc0e76e7
Total VRAM 23028 MB, total RAM 65382 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : native
Hint: your device supports --pin-shared-memory for potential speed improvements.
Hint: your device supports --cuda-malloc for potential speed improvements.
Hint: your device supports --cuda-stream for potential speed improvements.
VAE dtype: torch.bfloat16
Launching Web UI with arguments: --civsfz-api-key 3b2e5ea75e70de5cd9d3a6a7d0370fc2
Total VRAM 23028 MB, total RAM 65382 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : native
Hint: your device supports --pin-shared-memory for potential speed improvements.
Hint: your device supports --cuda-malloc for potential speed improvements.
Hint: your device supports --cuda-stream for potential speed improvements.
VAE dtype: torch.bfloat16
CUDA Stream Activated:  False
Using pytorch cross attention
ControlNet preprocessor location: D:\SD\forge\stable-diffusion-webui-forge\models\ControlNetPreprocessor
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.1.2, num models: 13
Loading weights [67ab2fd8ec] from D:\SD\forge\stable-diffusion-webui-forge\models\Stable-diffusion\_SDXL_1_0\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
model_type EPS
UNet ADM Dimension 2816
2024-03-01 13:24:36,334 - ControlNet - INFO - ControlNet UI callback registered.
CivBrowser: Set types
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
CivBrowser: Set base models
CivBrowser: Set sorts
CivBrowser: Set periods
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 15.3s (prepare environment: 5.3s, import torch: 2.3s, import gradio: 0.6s, setup paths: 0.6s, other imports: 0.4s, list SD models: 0.2s, load scripts: 3.0s, create ui: 2.6s, gradio launch: 0.2s).
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Loading VAE weights specified in settings: D:\SD\forge\stable-diffusion-webui-forge\models\VAE\sdxl-fixed.vae.safetensors
To load target model SDXLClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  21473.8427734375
[Memory Management] Model Memory (MB) =  2144.3546981811523
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  18305.488075256348
Moving model(s) has taken 0.56 seconds
Model loaded in 5.3s (load weights from disk: 0.2s, forge load real models: 3.6s, forge finalize: 0.4s, load VAE: 0.1s, calculate empty prompt: 0.9s).
Loading VAE weights specified in settings: D:\SD\forge\stable-diffusion-webui-forge\models\VAE\sdxl_vae.safetensors
VAE weights loaded.
activating extra network lora with arguments [<modules.extra_networks.ExtraNetworkParams object at 0x000001ADA9646FE0>, <modules.extra_networks.ExtraNetworkParams object at 0x000001ADA6CE78E0>]: AttributeError
Traceback (most recent call last):
  File "D:\SD\forge\stable-diffusion-webui-forge\extensions-builtin\Lora\networks.py", line 68, in load_networks
    net = load_network(name, network_on_disk)
  File "D:\SD\forge\stable-diffusion-webui-forge\extensions-builtin\Lora\networks.py", line 31, in load_network
    net.mtime = os.path.getmtime(network_on_disk.filename)
AttributeError: 'NoneType' object has no attribute 'filename'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\SD\forge\stable-diffusion-webui-forge\modules\extra_networks.py", line 135, in activate
    extra_network.activate(p, extra_network_args)
  File "D:\SD\forge\stable-diffusion-webui-forge\extensions-builtin\Lora\extra_networks_lora.py", line 43, in activate
    networks.load_networks(names, te_multipliers, unet_multipliers, dyn_dims)
  File "D:\SD\forge\stable-diffusion-webui-forge\extensions-builtin\Lora\networks.py", line 70, in load_networks
    errors.display(e, f"loading network {network_on_disk.filename}")
AttributeError: 'NoneType' object has no attribute 'filename'

To load target model SDXL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  19625.517578125
[Memory Management] Model Memory (MB) =  4897.086494445801
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  13704.4310836792
Moving model(s) has taken 0.75 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 28/28 [00:04<00:00,  6.53it/s]
To load target model AutoencoderKL█████████████████████████████████████████████████████| 28/28 [00:03<00:00,  7.04it/s]
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  14606.69140625
[Memory Management] Model Memory (MB) =  159.55708122253418
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  13423.134325027466
Moving model(s) has taken 0.26 seconds
Loading VAE weights specified in settings: D:\SD\forge\stable-diffusion-webui-forge\models\VAE\sdxl-fixed.vae.safetensors
VAE weights loaded.
Total progress: 100%|██████████████████████████████████████████████████████████████████| 28/28 [00:04<00:00,  5.62it/s]

Additional information

Upon further investigation, I discovered an issue in the load_networks function.

When clicking on the relevant Lora in the Lora tab of the browser-based WebUI, an incorrect value is inserted into the prompt. The correct name should be 'cumbaXLP_vertical', indicating that an incorrect value is being inserted.

Interestingly, the value 'cumbaXLP_vertical' is included as a key in available_networks (used within load_networks()), but not in available_network_aliases.
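The lookup failure can be illustrated with a minimal sketch (simplified placeholder values, not the actual Forge code): if one table is keyed by the alias from the model's ss_output_name while the other is keyed by the file name, looking the alias up in the wrong table silently returns None, which later blows up as the AttributeError above.

```python
# Hypothetical sketch of the key mismatch: the alias 'cumbaXLP_vertical'
# exists in available_networks but not in available_network_aliases,
# which here is keyed by a made-up file name.
available_networks = {"cumbaXLP_vertical": "<NetworkOnDisk>"}
available_network_aliases = {"some_renamed_file": "<NetworkOnDisk>"}

network_on_disk = available_network_aliases.get("cumbaXLP_vertical")
print(network_on_disk)  # None -> network_on_disk.filename raises AttributeError
```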

Suggested Solution

The behavior should be corrected so that clicking a specific Lora in the Lora tab of WebUI Forge does not insert the incorrect name (the file name). Typing the exact value 'cumbaXLP_vertical' into the prompt directly (bypassing the UI insertion) results in the correct functionality.
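Independently of the naming fix, the crash itself could be made friendlier. A minimal defensive sketch (function and message are hypothetical, only the names are borrowed from the traceback above; this is not the actual Forge code) would check for None before dereferencing .filename, so that an unresolved Lora produces a clear error instead of an AttributeError in both the try block and its exception handler:

```python
import os

def load_network_guarded(name, network_on_disk):
    # Hypothetical guard: if the lookup produced no NetworkOnDisk entry,
    # fail with a descriptive message instead of letting
    # None.filename raise 'NoneType' object has no attribute 'filename'.
    if network_on_disk is None:
        raise FileNotFoundError(f"LoRA '{name}' could not be resolved to a file on disk")
    return os.path.getmtime(network_on_disk.filename)
```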

CircleD5 commented 6 months ago

I have some concerns regarding the code.

In networks.py, the function list_available_networks() registers the file name as the key in available_network_aliases. Automatic1111's version, on the other hand, registers entry.alias as the key, which essentially means using the model's ss_output_name as the key.

This implies that Lora models that have been renamed (where the ss_output_name and file name do not match) cannot be loaded.

Shouldn't we register entry.alias as the Key in available_network_aliases instead of the file name to align with Automatic1111 Webui?
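The proposed registration could be sketched as follows (a hypothetical simplification: entry.name is assumed to be the filename stem and entry.alias the value derived from the model's ss_output_name metadata; this is not the actual list_available_networks() code):

```python
from types import SimpleNamespace

available_networks = {}
available_network_aliases = {}

def register(entry):
    # Key the main table by the filename stem, but key the alias table by
    # entry.alias, as Automatic1111 does, so a renamed file whose
    # ss_output_name no longer matches its filename still resolves.
    available_networks[entry.name] = entry
    available_network_aliases[entry.alias] = entry

# A renamed Lora: filename stem and ss_output_name differ.
entry = SimpleNamespace(name="my_renamed_lora", alias="cumbaXLP_vertical")
register(entry)
assert available_network_aliases["cumbaXLP_vertical"] is entry
```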

BrickMissle commented 6 months ago

I'm assuming changing the "refer to lora by" option from Filename to Alias from file is not a viable workaround, nor is renaming the lora safetensors file itself? If I understand correctly, that is.

Much like your lora, it seems you found a part of the code that's a bit of a mess

CircleD5 commented 6 months ago

Thank you! I wasn't aware of the "refer to lora by" option. After changing this option from 'Alias from file' to 'Filename', it started working properly.

Therefore, my problem is solved by changing the "refer to lora by" option from 'Alias from file' to 'Filename'.

However, while Automatic1111's WebUI works correctly with 'Alias from file', WebUI Forge does not. I am unable to determine whether this is intentional behavior or a bug, so I would like to leave the decision of whether to close this issue or keep it open to the maintainers or a third party who understands this code.

aaronjolson commented 2 months ago

Just to add another anecdotal data point: I also ran into this issue and switched the "refer to lora by" option to Filename under the Extra Networks settings. Re-running my job, I received a similar but slightly different error, and then realized I was also missing one of the LORA models I was attempting to reference (a typo in the name). A1111 would usually throw a "referenced lora 'abcd' not found, skipping" warning in these situations. Remedying that issue, everything is now working great!
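The graceful-skip behavior described above can be sketched as a minimal illustration (a hypothetical simplification of how a loader might warn and continue, not the actual A1111 or Forge code):

```python
def load_networks_skip_missing(names, available_network_aliases):
    # Hypothetical loader: warn about unresolved names and continue,
    # instead of crashing on the first missing LoRA.
    loaded = []
    for name in names:
        network_on_disk = available_network_aliases.get(name)
        if network_on_disk is None:
            print(f"referenced lora '{name}' not found, skipping")
            continue
        loaded.append(network_on_disk)
    return loaded
```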