AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: Previous model affect current model image generation even with same seed #14200

Open · miguel234457 opened this issue 11 months ago

miguel234457 commented 11 months ago

Is there an existing issue for this?

What happened?

This problem comes from switching models while generating the same image with the same lora. I tried generating the same image using the same seed, lora, and model. After switching between the old and new model, I get a significantly different result, as if effects of the previous model are bleeding into the new model's image, which should not happen since models are separate from each other.

Steps to reproduce the problem

  1. Generate an image with the lora using the new model
  2. Switch to the bad model
  3. Generate an image with the lora using the bad model
  4. Switch back to the new model
  5. Generate an image with the lora using the new model, with the same seed
  6. Get a significantly different generation despite the same model and seed (a scripted version of this loop is sketched below)
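
For anyone who wants to reproduce this outside the UI, here is a minimal sketch that drives the same loop through the web UI's built-in API (assuming the UI was launched with --api and is reachable at http://127.0.0.1:7860; the checkpoint titles, prompt, and lora name are placeholders, not taken from this report):

```python
# Sketch: reproduce the model-switch determinism check via the built-in API.
# Assumes the web UI was started with --api on http://127.0.0.1:7860 and that
# "good_model.safetensors" / "bad_model.safetensors" are placeholder checkpoint titles.
import hashlib
import requests

BASE = "http://127.0.0.1:7860"
PAYLOAD = {
    "prompt": "a photo of a cat <lora:my_lora:0.8>",  # placeholder prompt and lora
    "seed": 12345,
    "steps": 20,
    "cfg_scale": 7,
    "width": 512,
    "height": 512,
}

def switch_model(title: str) -> None:
    # POST /sdapi/v1/options with "sd_model_checkpoint" switches the loaded checkpoint.
    requests.post(f"{BASE}/sdapi/v1/options",
                  json={"sd_model_checkpoint": title}).raise_for_status()

def generate() -> str:
    # Generate one image and return a short hash of it for easy comparison.
    r = requests.post(f"{BASE}/sdapi/v1/txt2img", json=PAYLOAD)
    r.raise_for_status()
    image_b64 = r.json()["images"][0]
    return hashlib.sha256(image_b64.encode()).hexdigest()[:8]

switch_model("good_model.safetensors")
first = generate()                      # step 1: baseline with the good model

switch_model("bad_model.safetensors")
generate()                              # steps 2-3: generate once with the other model

switch_model("good_model.safetensors")
second = generate()                     # steps 4-5: same seed, same model again

# Step 6: with deterministic settings these hashes should match; in this bug they don't.
print(first, second, "MATCH" if first == second else "DIFFERENT")
```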

What should have happened?

Models are separate from each other; switching between models should not affect another model's image generations.

Sysinfo

{ "Platform": "Windows-10-10.0.19045-SP0", "Python": "3.10.8", "Version": "v1.7.0-RC-2-g883d6a2b", "Commit": "883d6a2b34a2817304d23c2481a6f9fc56687a53", "Script path": "D:\sd\stable-diffusion-webui", "Data path": "D:\sd\stable-diffusion-webui", "Extensions dir": "D:\sd\stable-diffusion-webui\extensions", "Checksum": "d30082a1b5d297d57317fae3463502cc4fd72cb555d82e6b244c6cc999d70e10", "Commandline": [ "launch.py", "--xformers", "--opt-split-attention", "--no-half-vae", "--upcast-sampling", "--no-gradio-queue" ], "Torch env info": { "torch_version": "1.13.1+cu117", "is_debug_build": "False", "cuda_compiled_version": "11.7", "gcc_version": null, "clang_version": null, "cmake_version": null, "os": "Microsoft Windows 10 家用版", "libc_version": "N/A", "python_version": "3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)] (64-bit runtime)", "python_platform": "Windows-10-10.0.19045-SP0", "is_cuda_available": "True", "cuda_runtime_version": null, "cuda_module_loading": "LAZY", "nvidia_driver_version": "536.23", "nvidia_gpu_models": "GPU 0: NVIDIA GeForce RTX 3080 Ti", "cudnn_version": null, "pip_version": "pip3", "pip_packages": [ "numpy==1.23.5", "open-clip-torch==2.20.0", "pytorch-lightning==1.9.4", "torch==1.13.1+cu117", "torchdiffeq==0.2.3", "torchmetrics==0.11.4", "torchsde==0.2.6", "torchvision==0.14.1+cu117" ], "conda_packages": null, "hip_compiled_version": "N/A", "hip_runtime_version": "N/A", "miopen_runtime_version": "N/A", "caching_allocator_config": "garbage_collection_threshold:0.9,max_split_size_mb:512", "is_xnnpack_available": "True"

What browsers do you use to access the UI?

Microsoft Edge

Console logs

No logs, everything runs normally

Additional information

I have already used the fixes from similar issues like #13917 and #13178 and switched to the dev branch, and the problem still persists. I also tried deleting the problematic model and lora and redownloading the lora again, and it seems to still affect the generations of the new model.


catboxanon commented 11 months ago

Likely same issue being experienced in https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12937.

missionfloyd commented 11 months ago

I've seen this before with the pruned protogen models. Using the unpruned version fixed it.

miguel234457 commented 11 months ago

> I've seen this before with the pruned protogen models. Using the unpruned version fixed it.

That is not available for my model, and I hope that an actual fix is implemented rather than a band-aid solution.

w-e-w commented 11 months ago

Unless this is actually another issue, this is likely triggered when switching from XL -> XL model. I don't believe this is related to lora.

[screenshot: 2023-12-05 00_13_21_248 explorer]

In the screenshot above, the last eight digits of the filename are the image hash; basically, if those digits are different, then the image is different.

Notice that the image in pink, generated after switching to and back from an XL model, is different from the previous generation.

Switching from an XL to a 1.5 model and back actually resets it to the original state.
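
For readers who want to run the same kind of check on their own saved outputs, here is a minimal sketch (not w-e-w's script; it simply content-hashes the PNG files in an output folder, and the directory path is a placeholder):

```python
# Sketch: compare saved outputs by content hash, similar in spirit to the
# filename-hash check described above. The output directory is a placeholder.
import hashlib
from pathlib import Path

OUT_DIR = Path("outputs/txt2img-images")  # placeholder; adjust to your install

for png in sorted(OUT_DIR.rglob("*.png")):
    digest = hashlib.sha256(png.read_bytes()).hexdigest()[:8]
    print(f"{digest}  {png.name}")
# Two generations with identical settings should print identical digests;
# in this report they differ after an XL -> XL model switch.
```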

Also, earlier today AUTO seems to have found a possible fix.

jmblsmit commented 11 months ago

This seems similar to this issue as well https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13516

Wladastic commented 11 months ago

This issue persists even without using lora, when generating with img2img. When I start sdui it loads sd_XL_1.0, for example, but img2img keeps raising

"NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check."

errors until you generate an image in txt2img; when you switch back to img2img, it suddenly works. When switching models, it's the same issue again until I generate a random image from txt2img.

Deleting the repo and cloning it again fixes this problem for a while, for some reason. Maybe some caching errors? I haven't read the code further, as I don't have the time.
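
A hedged sketch of the workaround described above, driven through the built-in API instead of the UI (assumes the UI was launched with --api on http://127.0.0.1:7860; the input image, prompt, and parameters are placeholders, and this only automates the "generate a throwaway txt2img image first" step rather than fixing the underlying NaN issue):

```python
# Sketch of the workaround described above: run a throwaway txt2img generation
# after each model switch, then do the real img2img call. Placeholder values only.
import base64
import requests

BASE = "http://127.0.0.1:7860"

def warm_up() -> None:
    # Throwaway txt2img call; per the report above, subsequent img2img calls
    # stop raising NansException once one txt2img generation has run.
    requests.post(f"{BASE}/sdapi/v1/txt2img",
                  json={"prompt": "warm-up", "steps": 1,
                        "width": 512, "height": 512}).raise_for_status()

def img2img(init_png_path: str, prompt: str) -> dict:
    # Send an img2img request with the given init image and prompt.
    with open(init_png_path, "rb") as f:
        init_b64 = base64.b64encode(f.read()).decode()
    r = requests.post(f"{BASE}/sdapi/v1/img2img",
                      json={"init_images": [init_b64], "prompt": prompt,
                            "denoising_strength": 0.5, "steps": 20})
    r.raise_for_status()
    return r.json()

warm_up()
result = img2img("input.png", "a photo of a cat")  # placeholder input image and prompt
print(len(result["images"]), "image(s) returned")
```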

Ren4issance commented 11 months ago

Having the issue here also. Not using LoRAs but Textual Inversions (SDXL embeddings.safetensors)