AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: can not replicate same image again ! seed not work ? #12836

Open sinanisler opened 11 months ago

sinanisler commented 11 months ago

Is there an existing issue for this?

What happened?

Everything was working fine a couple of days ago when I generated this image: 00002-897252048

Today I am trying EVERYTHING with the same settings, but this comes out instead:

00044-897252048

Why?

Steps to reproduce the problem

Who the hell knows what changed? Did something update automatically? I didn't update anything.

What should have happened?

The same image from the same generation settings.

Version or Commit where the problem happens

1.5.2

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

Nvidia GPUs (RTX 20 above)

Cross attention optimization

Automatic

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

--medvram --no-half-vae

List of extensions

sd-webui-agent-scheduler

Console logs

venv "D:\SD\WebUI\stable-diffusion-webui\venv\Scripts\Python.exe"
fatal: detected dubious ownership in repository at 'D:/SD/WebUI/stable-diffusion-webui'
'D:/SD/WebUI/stable-diffusion-webui' is owned by:
        'S-1-5-32-544'
but the current user is:
        'S-1-5-21-115958371-3599349963-1937710522-1000'
To add an exception for this directory, call:

        git config --global --add safe.directory D:/SD/WebUI/stable-diffusion-webui
fatal: detected dubious ownership in repository at 'D:/SD/WebUI/stable-diffusion-webui'
'D:/SD/WebUI/stable-diffusion-webui' is owned by:
        'S-1-5-32-544'
but the current user is:
        'S-1-5-21-115958371-3599349963-1937710522-1000'
To add an exception for this directory, call:

        git config --global --add safe.directory D:/SD/WebUI/stable-diffusion-webui
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Version: 1.5.2
Commit hash: <none>

Launching Web UI with arguments: --medvram --no-half-vae
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Loading weights [565af52b8e] from D:\SD\WebUI\stable-diffusion-webui\models\Stable-diffusion\UnstableDiffusers_xlV4Grimorium-SDXL.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
INFO - [AgentScheduler] Task queue is empty
INFO - [AgentScheduler] Registering APIs
Startup time: 25.5s (launcher: 5.1s, import torch: 6.8s, import gradio: 2.4s, setup paths: 2.8s, other imports: 2.2s, setup codeformer: 0.1s, list SD models: 0.4s, load scripts: 2.9s, create ui: 0.9s, gradio launch: 0.9s, app_started_callback: 0.8s).
Creating model from config: D:\SD\WebUI\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Applying attention optimization: Doggettx... done.
Model loaded in 65.2s (load weights from disk: 3.4s, create model: 1.1s, apply weights to model: 56.2s, apply half(): 3.0s, calculate empty prompt: 1.5s).
100%|██████████████████████████████████████████████████████████████████████████████████| 32/32 [00:35<00:00,  1.10s/it]
Total progress: 32it [00:50,  1.57s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 32/32 [00:33<00:00,  1.04s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 32/32 [00:38<00:00,  1.22s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 32/32 [00:33<00:00,  1.05s/it]
Total progress: 32it [00:48,  1.50s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 24/24 [00:25<00:00,  1.08s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 24/24 [00:31<00:00,  1.31s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 24/24 [00:31<00:00,  1.03it/s]

Additional information

My setup is mostly stock; other than the agent scheduler extension I don't install or change anything. I'm using --medvram because the 3060 has 12 GB of VRAM and generation sometimes exceeds that; with --medvram it usually stays under 10 GB.

sinanisler commented 11 months ago

If something was updated, I don't know how that is possible. I installed webui with git pull, and after that I just run the .bat file, that's it really.

I tried a fresh install and it didn't work; with the latest release installed, the second image pops up again.

Edit: my old install was from the 25th, earlier this week.

w-e-w commented 11 months ago

One of your images was generated with hires fix, the other was not.

sinanisler commented 11 months ago

One of your images was generated with hires fix, the other was not.

The problem is the same: even when I use hires fix it still shows the same image.

@w-e-w

Here, same result: 00000-897252048

It is so frustrating. THERE SHOULD BE A CHANGE WARNING.

I remember a similar thing happening on 1.5 about six months ago; I wasn't able to solve that one either, and it was a stupid problem just like this one...

THERE MUST BE A CHANGE-WARNING CHECK TO AVOID THIS... repeatability is everything; without it, all of this is meaningless...

Or maybe put more metadata in the images, like all the dependencies and versions used to produce them? I don't know... we need a solution, that's for sure.
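As a rough illustration of that metadata idea (just a sketch, assuming Pillow's PngInfo; the helper and the extra field names are hypothetical, not webui's actual API):

```python
# Hypothetical sketch: store environment versions next to the usual
# "parameters" text chunk in the PNG, so a later mismatch can be detected.
from importlib.metadata import PackageNotFoundError, version

import torch
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def pkg_version(name: str) -> str:
    try:
        return version(name)
    except PackageNotFoundError:
        return "not installed"


def save_with_env_metadata(image: Image.Image, path: str, parameters: str) -> None:
    info = PngInfo()
    info.add_text("parameters", parameters)             # what webui already embeds
    info.add_text("torch", torch.__version__)           # extra, hypothetical fields
    info.add_text("cuda", torch.version.cuda or "cpu")
    info.add_text("xformers", pkg_version("xformers"))
    image.save(path, pnginfo=info)
```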

Zhayton commented 11 months ago

I'm noticing the same issue on two images generated back to back, as well as one generated days ago vs now :/

w-e-w commented 11 months ago

I'm noticing the same issue on two images generated back to back, as well as one generated days ago vs now :/

If you're using certain cross attention optimizations then non-deterministic results are expected.

sometimes the difference can be quite noticeable
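For reference, PyTorch does expose switches that prefer deterministic kernels at some speed cost; webui doesn't necessarily set these, and even then they don't guarantee identical output across different GPUs, drivers, or library versions. A minimal sketch:

```python
import torch

# Prefer deterministic kernels; with warn_only=True, ops without a deterministic
# implementation warn instead of raising. This reduces run-to-run variation on
# the same machine, but does nothing across different hardware/driver stacks.
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
torch.use_deterministic_algorithms(True, warn_only=True)
```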

sinanisler commented 11 months ago

I understand what you are saying, but it needs to be deterministic; that should be the aim for the UI in the first place.

Even something like logging the setup dependencies after the first install, recording their versions and locking them, then showing a warning if anything changes.

Keeping a version history check on all dependencies/setup would stop these seed-breaking changes.

This type of control or check would give us long-term deterministic results and protect the setup from being RANDOMLY CHANGED.
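Something along these lines could implement that warning; this is only a hypothetical sketch (the snapshot file name and function names are made up, nothing like this exists in webui): snapshot the installed package versions on first launch and diff them on later launches.

```python
# Hypothetical "change warning" sketch: record installed package versions once,
# then warn on later launches if anything drifted.
import json
from importlib.metadata import distributions
from pathlib import Path

SNAPSHOT = Path("env_snapshot.json")


def current_env() -> dict[str, str]:
    return {dist.metadata["Name"]: dist.version for dist in distributions()}


def check_env() -> None:
    env = current_env()
    if not SNAPSHOT.exists():
        SNAPSHOT.write_text(json.dumps(env, indent=2, sort_keys=True))
        return
    old = json.loads(SNAPSHOT.read_text())
    changed = {name: (old.get(name), ver)
               for name, ver in env.items() if old.get(name) != ver}
    if changed:
        print("WARNING: environment changed since last snapshot:")
        for name, (before, after) in changed.items():
            print(f"  {name}: {before} -> {after}")
```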

Sharnoth commented 11 months ago

I am also getting slightly different images from time to time with SDXL models. Furthermore, two different systems with the same copy of webui produce significantly different results when going past 1024x1024 resolution.

That is consistent across several versions I tested, starting from 1.5.0 up to the latest commit a0af285 on the release-candidate branch.

I've tried setting "Cross attention optimization" to "None" in settings, but that didn't change anything.

For example:

Prompt:

dog
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2012761744, Size: 1024x1024, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, VAE hash: 551eac7037, VAE: sdxl_vae.safetensors, Version: v1.6.0-RC-28-ga0af2852

AMD CPU / RTX 4070 Ti: 00041-2012761744-sd_xl_base_1 0-1024x1024-20-7-31e35c80fc-1
Intel CPU / RTX 4070: 00043-2012761744-sd_xl_base_1 0-1024x1024-20-7-31e35c80fc-1

Prompt:

dog
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2012761744, Size: 1024x1200, Model hash: 31e35c80fc, Model: sd_xl_base_1.0, VAE hash: 551eac7037, VAE: sdxl_vae.safetensors, Version: v1.6.0-RC-28-ga0af2852

AMD CPU / RTX 4070 Ti: 00054-2012761744-sd_xl_base_1 0-1024x1200-20-7-31e35c80fc-1
Intel CPU / RTX 4070: 00055-2012761744-sd_xl_base_1 0-1024x1200-20-7-31e35c80fc-1

Thom293 commented 10 months ago

Same issue on a 5-day-old build. I found the other thread and set checkpoints kept in RAM to 2, but that didn't solve it either.

If I restart I can get duplicate images for a bit, but once I start changing models or LoRAs everything gets crappy real fast. This isn't a deterministic vs. non-deterministic issue. LoRAs and/or models somehow become stuck in memory and apply to everything after. Eventually, no matter what you switch to, everything has the same look. I'm guessing memory isn't being cleared when the prompt or model is changed.

sinanisler commented 10 months ago

@Sharnoth your situation is a bit different: two different PCs, two different installs, and two different configurations.

I opened this issue for the same setup: same PC, same install, same files, same model, same folders :)

bjornlarssen commented 10 months ago

Same problem. Running on Colab. I can't reproduce images. Sometimes they are similar but not quite; sometimes entirely different.

Whatever has changed, can I have a checkbox to click so that it changes back? I tried token merging, changing the random number generator… everything, really. Same model, same hash, prompt, LoRAs, etc.

Left: 1.5.8, right: 1.6.0.

I probably wouldn't care much if not for the fact that the top left (1.5.8) is probably the closest I got to generating THE image I wanted… and now both the face and body (but not pose) have changed, and so has the debris around them (check out the blue rubbish to the right in the top row in 1.5.8 vs. no blue at all in 1.6.0).

00639-1354954787_20230901184052-MASTERPIECE

sinanisler commented 10 months ago

Looks like the important issue is the noise: even when the seeds are the same, the noise may start slightly different, and that makes the final image slightly different.

The most consistent images come from combining the same seed with the same ControlNet noise.

But of course, at least for now, as far as I know there is no default way to lock the ControlNet noise together with the same seed.

Maybe we need longer seeds to better lock in the same noise every time.

I will test this a bit.
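For what it's worth, the seed by itself is already enough to reproduce the starting noise, as long as the generator, device, dtype, and latent shape all match; a simplified sketch of how diffusion pipelines typically derive the initial latent (not webui's exact code):

```python
import torch

# Same seed -> identical starting noise, provided generator device, dtype and
# latent shape all match. Changing any of those changes every image.
def initial_noise(seed: int, shape=(1, 4, 128, 128), device="cpu"):
    gen = torch.Generator(device=device).manual_seed(seed)
    return torch.randn(shape, generator=gen, device=device)

a = initial_noise(897252048)
b = initial_noise(897252048)
assert torch.equal(a, b)  # reproducible; a longer seed would not change this
```

So any drift is more likely coming from what happens after the noise (optimizations, library versions) than from the seed being too short.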

bjornlarssen commented 10 months ago

I was thinking the same. I tried the GPU, CPU, and NV generators. CPU of course changes everything entirely, and since I'm on Colab, GPU and NV produce the same result. I toggled the cross-attention optimisation off and on. It might just be in my head, but the "new" images seem slightly lower quality. (Compare the small pumpkins in the back of the Frankenstein image and the fluorescent green of the bottles in the bottom left corner.)

I think shuffle > none is as close as we get to controlling noise, right? (Except of course that doesn't help in this particular case…)
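To illustrate why the noise source setting matters: the CPU and CUDA random generators are different implementations, so the same seed gives different values on each. A small sketch:

```python
import torch

seed = 1354954787
cpu_noise = torch.randn((4,), generator=torch.Generator("cpu").manual_seed(seed))

if torch.cuda.is_available():
    gpu_gen = torch.Generator("cuda").manual_seed(seed)
    gpu_noise = torch.randn((4,), generator=gpu_gen, device="cuda")
    # Different RNG implementations: same seed, different values, which is why
    # switching the noise source (GPU/CPU/NV) changes every image even when
    # all other settings are identical.
    print(torch.allclose(cpu_noise, gpu_noise.cpu()))  # expected: False
```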

Thom293 commented 10 months ago

Try it after a fresh restart to see if you can duplicate. It's only after use/switching that things get crappy for me.

bjornlarssen commented 10 months ago

Oh, I have, many times… Since I use Colab, and that thing still charges me when I stop (but don't disconnect) the runtime, I restart at least once per day. Now that the RAM leak has been tamed it's 2-3 times per day, not 2-3 times per hour, but it doesn't matter whether the model is loaded or reloaded.

And of course, now that people are creating 1.6.0 images they are happy with, there's going to be a need for a "pre-1.6.0 / post-1.6.0" switch, or the butchering will never stop. I've got some 1.6.0 images now that I'm very happy with and might want to recreate… :)