kohya-ss / sd-webui-additional-networks


ignore because weight is 0: #183

Open ghost opened 1 year ago

ghost commented 1 year ago

hi! I'm receiving "ignore because weight is 0" while generating. This was working flawlessly as of the 5th of March, but after updating Automatic1111 today it stopped producing my usual output.

I originally submitted an issue on Automatic1111's repo, but it was suggested I come here. It worked once after disabling and re-enabling it, but after closing the web UI and returning to it later, it went back to ignoring the weights I set. I also deleted and reinstalled Additional Networks, but I'm still getting the same issue.

Even though I've set the weight for each of the LoRAs I use, it either ignores them all or is selective about which ones it ignores; it has even assigned a value I didn't set.

Please help, this is very frustrating.

kohya-ss commented 1 year ago

I updated Automatic1111's Web UI to 1.2.1 and tested it, but unfortunately I could not reproduce the issue. I tried stopping and restarting the Web UI, closing and reopening the browser, etc.

Could you please give me more details on the steps to reproduce the issue? Also, please share if you see any error messages in the console.

ghost commented 1 year ago

hi! Thanks for the reply. Unfortunately, I don't know how to reproduce the issue; I didn't do anything out of the ordinary other than update Automatic1111 (which isn't unusual, since I use the git pull command in the batch file). It seems to be happening randomly, because now it's working again and I don't know why. I'll try again later today to see if the issue persists.

A few notes on the issue though... On a couple of occasions, the weight I set would show up differently in the console; for example, a weight set to 0.5 was showing as 0.95 in the console. Other times only one LoRA would be set correctly and the others would be set to 0. And other times still, all were ignored because, according to the console, they were set to 0 even though they weren't.

Again, I did everything as I normally do and my routine didn't change: I launched the web UI, waited, then configured my settings and hit generate. The only things I did differently were after the issues happened (which I mentioned above).

Thanks again, I'll keep you updated.

kohya-ss commented 1 year ago

Thank you for your reply! If you find a procedure to reproduce the issue, please let me know.

There have been a few times when the weights shown on the web UI and the weights actually applied were different. However, I have not been able to reproduce it. Perhaps it is a gradio issue, but I am not familiar with gradio and have no idea what the cause is.

I will keep a close eye on it.

ghost commented 1 year ago

No worries. I didn't find a procedure per se, but this is exactly what I did this morning...

OK, so I launched the web UI, cut and pasted my prompts, adjusted my settings (sampling steps and dimensions), enabled Additional Networks, selected my LoRAs and adjusted their weights one at a time by highlighting the number and entering a value, then hit generate. This time it's ignoring the first LoRA but not the other two. So random.

I'm gonna close and relaunch the web UI and report back. For now, here is a copy of the console log. No errors that I can see.

Already up to date.
venv "S:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)]
Version: v1.2.1
Commit hash: 89f9faa63388756314e8a1d96cf86bf5e0663045
Installing requirements

Launching Web UI with arguments: --xformers --precision full --no-half --no-half-vae
[AddNet] Updating model hashes...
100%|████████████████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 1671.17it/s]
[AddNet] Updating model hashes...
100%|███████████████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 10030.62it/s]
ControlNet v1.1.173
ControlNet v1.1.173
Loading weights [1e859984dc] from S:\stable-diffusion-webui\models\Stable-diffusion\creepyDiffusion_v20.safetensors
Creating model from config: S:\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 23.6s (import torch: 2.1s, import gradio: 1.4s, import ldm: 1.3s, other imports: 4.2s, list SD models: 0.4s, setup codeformer: 0.3s, load scripts: 3.7s, create ui: 8.5s, gradio launch: 1.5s, scripts app_started_callback: 0.1s).
Loading VAE weights specified in settings: S:\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 46.8s (load weights from disk: 1.5s, create model: 0.5s, apply weights to model: 34.9s, load VAE: 6.3s, move model to device: 2.3s, load textual inversion embeddings: 1.3s).
ignore because weight is 0: bodyHorror_v10(5e2c897e1e90)
LoRA weight_unet: 0.4, weight_tenc: 0.4, model: cursedImages_cursedImages(d53b420ad0ef)
dimension: {128}, alpha: {128.0}, multiplier_unet: 0.4, multiplier_tenc: 0.4
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
original forward/weights is backed up.
enable LoRA for text encoder
enable LoRA for U-Net
shapes for 0 weights are converted.
LoRA model cursedImages_cursedImages(d53b420ad0ef) loaded:
LoRA weight_unet: 0.5, weight_tenc: 0.5, model: realisticVaginasGodPussy_godpussy2Innie(da541bda205a)
dimension: {128}, alpha: {128.0}, multiplier_unet: 0.5, multiplier_tenc: 0.5
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
enable LoRA for text encoder
enable LoRA for U-Net
shapes for 0 weights are converted.
LoRA model realisticVaginasGodPussy_godpussy2Innie(da541bda205a) loaded:
setting (or sd model) changed. new networks created.
47%|██████████████████████████████████████▎ | 28/60 [00:52<00:56, 1.77s/it]
Total progress: 47%|██████████████████████████████▊ | 28/60 [00:47<00:56, 1.77s/it]

ghost commented 1 year ago

OK, so after relaunching the web UI and following the same routine I described above, I got this. All ignored again...

Already up to date.
venv "S:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)]
Version: v1.2.1
Commit hash: 89f9faa63388756314e8a1d96cf86bf5e0663045
Installing requirements

Launching Web UI with arguments: --xformers --precision full --no-half --no-half-vae
[AddNet] Updating model hashes...
100%|███████████████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 20049.25it/s]
[AddNet] Updating model hashes...
100%|███████████████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 20039.68it/s]
ControlNet v1.1.173
ControlNet v1.1.173
Loading weights [1e859984dc] from S:\stable-diffusion-webui\models\Stable-diffusion\creepyDiffusion_v20.safetensors
Creating model from config: S:\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: S:\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 7.3s (load weights from disk: 0.4s, create model: 0.5s, apply weights to model: 1.1s, load VAE: 1.8s, move model to device: 2.2s, load textual inversion embeddings: 1.2s).
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 15.1s (import torch: 2.0s, import gradio: 1.2s, import ldm: 0.6s, other imports: 1.0s, setup codeformer: 0.2s, load scripts: 2.0s, create ui: 7.7s, gradio launch: 0.3s).
ignore because weight is 0: bodyHorror_v10(5e2c897e1e90)
ignore because weight is 0: cursedImages_cursedImages(d53b420ad0ef)
ignore because weight is 0: realisticVaginasGodPussy_godpussy2Innie(da541bda205a)
27%|█████████████████████▊ | 16/60 [00:32<00:59, 1.34s/it]
Total progress: 27%|█████████████████▌ | 16/60 [00:19<00:58, 1.32s/it]

kohya-ss commented 1 year ago

Unfortunately I am still unable to reproduce the issue. However, I have noticed that when the weight slider is moved quickly, the weights are not reflected correctly. This seems to be because the change event of gradio's Slider is not fired correctly.

I am not sure if the issue is caused by the same thing, but there is definitely something wrong with gradio. I will look into the possibility of changing the extension so that the change event is not relied on.
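For what it's worth, here is a minimal, self-contained sketch of the suspected failure mode, assuming gradio 3.x; the component names and the remember_weight callback are illustrative and are not the extension's actual code. If the backend only learns the weight through the Slider's change event, a dropped event while the handle is dragged quickly leaves the backend with a stale value (possibly the initial 0):

```python
import gradio as gr

# Stand-in for the weight the extension would hand to the LoRA loader.
# Purely illustrative; this is not sd-webui-additional-networks code.
applied_weight = {"value": 0.0}

def remember_weight(w):
    # Runs on the Slider's change event. If that event is missed while the
    # slider moves quickly, applied_weight keeps the old value (e.g. 0).
    applied_weight["value"] = w
    return f"weight the backend would use: {w}"

with gr.Blocks() as demo:
    slider = gr.Slider(minimum=0.0, maximum=1.0, step=0.05, value=0.0, label="LoRA weight")
    status = gr.Textbox(label="Backend view")
    slider.change(remember_weight, inputs=slider, outputs=status)

demo.launch()
```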

rushuna86 commented 1 year ago

@kohya-ss Quickly changing the value by the default increment of 0.05 also leads to it breaking. For example, starting at 0.8 and quickly tapping twice to increase it to 0.9, only the change to 0.85 is registered. I've encountered the 0-weight issue before, but not with all LoRAs; it's usually the first one on the list. Going back in and just using the incremental increase of 0.05 fixes it for me. It feels like the values are entered too quickly and the change isn't registered. This never happened previously, so it could be due to the gradio version bumps: it never happened in 3.23.1 and started happening with 3.26.

kohya-ss commented 1 year ago

The latest commit updates the value on the release event as well as the change event, so I think this bug may have been fixed. I would be grateful if you could test it.
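The idea reads roughly like the sketch below, assuming a gradio 3.x version whose Slider exposes a release event alongside change (names are illustrative, not the actual commit): the same callback is bound to both events, so even if a change event is dropped during a fast drag, the final value is still captured when the handle is released.

```python
import gradio as gr

# Illustrative stand-in for the weight passed on to the LoRA loader.
applied_weight = {"value": 0.0}

def remember_weight(w):
    applied_weight["value"] = w
    return f"weight the backend would use: {w}"

with gr.Blocks() as demo:
    slider = gr.Slider(minimum=0.0, maximum=1.0, step=0.05, value=0.0, label="LoRA weight")
    status = gr.Textbox(label="Backend view")
    # Bind the same handler to change and release: change keeps live updates,
    # release guarantees the final value once the user lets go of the handle.
    slider.change(remember_weight, inputs=slider, outputs=status)
    slider.release(remember_weight, inputs=slider, outputs=status)

demo.launch()
```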