lllyasviel / stable-diffusion-webui-forge

[Bug]: Forge UI not generating same images as Stable Diffusion A1111 UI #803

Open theonlyblbl opened 5 months ago

theonlyblbl commented 5 months ago

What happened?

Hello, since I started using the Forge UI, I have noticed small differences from the images generated with the Stable Diffusion A1111 UI. Using the same seed, sampler, CFG scale, model, and so on, the results show small variations. Here is an example:

Using Forge: [image: test3f]

Parameters:

1girl, looking away, multicolored hair Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 123456789, Size: 750x1000, Model hash: 377c3165ed, Model: ModeleSemiKawai, Clip skip: 2, Version: v1.7.0

Using Stable Diffusion A1111: [image: test3pf]

Parameters:

1girl, looking away, multicolored hair Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 123456789, Size: 750x1000, Model hash: 377c3165ed, Model: ModeleSemiKawai, Clip skip: 2, Version: f0.0.17v1.8.0rc-latest-277-g0af28699

For this example, the difference is in the hair near her neck.
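
For anyone trying to reproduce this outside the webui, here is a rough diffusers equivalent of the reported settings. This is a sketch only: the webui pipelines differ internally (prompt weighting, noise generation, clip-skip handling), so it will not match the webui images exactly, and the checkpoint path is a placeholder.

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Placeholder path; the webui pipelines will not reproduce this exactly.
pipe = StableDiffusionPipeline.from_single_file(
    "ModeleSemiKawai.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True  # "DPM++ 2M Karras"
)

image = pipe(
    prompt="1girl, looking away, multicolored hair",
    num_inference_steps=25,
    guidance_scale=7.0,
    width=752,   # latent dims must be multiples of 8; 750 is rounded up
    height=1000,
    clip_skip=2,
    generator=torch.Generator("cuda").manual_seed(123456789),
).images[0]
image.save("repro.png")
```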

Steps to reproduce the problem

There are no particular steps; the prompt used is quite simple (even with a negative prompt, the issue remains).

What should have happened?

The images generated should be identical
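
As a sanity check (a minimal PyTorch sketch, not the webui's actual code): a fixed seed fully determines the initial latent noise, so any divergence between the two UIs has to come from further down the stack, such as the sampler code, attention backend, or low-VRAM precision handling (see the console log below).

```python
import torch

# Same seed -> bit-identical initial noise. The shape shown is the latent
# for a ~750x1000 image: (batch, latent channels, H/8, W/8).
seed = 123456789
shape = (1, 4, 125, 94)
noise_a = torch.randn(shape, generator=torch.Generator().manual_seed(seed))
noise_b = torch.randn(shape, generator=torch.Generator().manual_seed(seed))
print(torch.equal(noise_a, noise_b))  # True
```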

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

sysinfo-2024-06-09-22-22.json

Console logs

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-277-g0af28699
Commit hash: 0af28699c45c1c5bf9cb6818caac6ce881123131
Launching Web UI with arguments:
Total VRAM 4096 MB, total RAM 32472 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram
Set vram state to: LOW_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3050 Laptop GPU : native
Hint: your device supports --pin-shared-memory for potential speed improvements.
Hint: your device supports --cuda-malloc for potential speed improvements.
Hint: your device supports --cuda-stream for potential speed improvements.
VAE dtype: torch.bfloat16
CUDA Stream Activated:  False
Using pytorch cross attention
Loading weights [377c3165ed] from D:\Test_AI\webui_forge_cu121_torch21\webui\models\Stable-diffusion\ModeleSemiKawai.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
model_type EPS
UNet ADM Dimension 0
Startup time: 10.0s (prepare environment: 2.4s, import torch: 3.5s, import gradio: 1.2s, setup paths: 0.7s, initialize shared: 0.2s, other imports: 0.7s, load scripts: 0.6s, create ui: 0.2s, gradio launch: 0.4s).
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
To load target model SD1ClipModel
Begin to load 1 model
Moving model(s) has taken 0.00 seconds
Model loaded in 3.9s (load weights from disk: 0.3s, forge load real models: 3.1s, calculate empty prompt: 0.5s).
To load target model BaseModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  3129.2920904159546
[Memory Management] Model Memory (MB) =  1639.4137649536133
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  465.8783254623413
Moving model(s) has taken 0.31 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:20<00:00,  1.21it/s]
To load target model AutoencoderKL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  3102.9288091659546
[Memory Management] Model Memory (MB) =  159.55708122253418
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  1919.3717279434204
Moving model(s) has taken 0.98 seconds
Total progress: 100%|██████████████████████████████████████████████████████████████████| 25/25 [00:21<00:00,  1.14it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 25/25 [00:21<00:00,  1.22it/s]

Additional information

No response

shutarojp commented 5 months ago

Same problem here. The random seed also isn't working: even when it is set to -1, all images generated at the same time have the same face.

cpz3501 commented 5 months ago

Same problem, has anyone figured out what is causing this?

I'm looking into this at the moment; the only clue I've found so far is in the names of my output files. With the same settings and txt2img generation, the naming pattern shows [denoising] as 0.6 in Forge, while in A1111 it is 0.7.

Unfortunately, I'm not a tech person and don't know much about the internals of the UI. The only thing I found were some posts saying that [denoising] is not relevant to txt2img at all, so I'm not sure what to make of that.
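
One way to check which recorded settings actually differ: both A1111 and Forge embed the full generation parameters in a PNG text chunk named "parameters", which Pillow exposes directly. The file names below are placeholders for the two outputs.

```python
from PIL import Image

# Print the embedded infotext for each file; comparing the two strings
# shows exactly which settings (e.g. the denoising value seen in the
# file names) differ between the UIs. Paths are placeholders.
for path in ("forge_output.png", "a1111_output.png"):
    print(path)
    print(Image.open(path).info.get("parameters", "<no parameters chunk>"))
```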

thiagojramos commented 5 months ago

Is there any difference between the two images? Oo

theonlyblbl commented 4 months ago

Is there any difference between the two images? Oo

Maybe the pictures I have chosen are not the best :'). There are differences near the neck: there is hair behind the neck in the second image and none in the first.
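
To settle whether the two renders really differ, a quick pixel diff works better than eyeballing it. A sketch, assuming both images are the same size; the file names are placeholders.

```python
import numpy as np
from PIL import Image

# Count how many pixels differ between the two renders and by how much.
a = np.asarray(Image.open("forge_output.png").convert("RGB"), dtype=np.int16)
b = np.asarray(Image.open("a1111_output.png").convert("RGB"), dtype=np.int16)

diff = np.abs(a - b)
print("differing pixels:", int((diff.sum(axis=-1) > 0).sum()))
print("max channel delta:", int(diff.max()))
```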