AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: Images are messed up in the last generation step(s) (Euler a, Euler, LMS etc.) #7244

Closed bosbrand closed 1 year ago

bosbrand commented 1 year ago

Is there an existing issue for this?

What happened?

With any of my models, generated images get screwed up in the last step(s). I can watch the generation going great when I run a script that outputs every step, right up until the last steps. Then it is as if a sort of sharpening takes place in certain places, most noticeably faces. It looks like sharpening, but it is more like distortion. With LMS this effect is most apparent, because there the problem areas just turn into a glitchy mosaic in the last steps.

Steps to reproduce the problem

  1. Start Stable Diffusion
  2. Choose a model
  3. Input prompts, set the size, choose the number of steps (it doesn't matter how many, but the problem may be worse with fewer steps); the CFG scale doesn't matter too much (within limits)
  4. Run the generation (a scripted run via the API is sketched below)
  5. Look at the output with step-by-step preview on.
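For anyone who wants to reproduce this without clicking through the UI, here is a minimal scripted sketch, assuming the webui is started with --api and listening on the default 127.0.0.1:7860; the payload fields follow the /sdapi/v1/txt2img JSON schema as I understand it, and the prompt, seed and sampler values are placeholders to be replaced with the failing settings:

import base64
import json
import urllib.request

# Hypothetical reproduction script: adjust prompt, seed and sampler to match
# the settings that show the problem. Requires the webui to run with --api.
payload = {
    "prompt": "test prompt",
    "negative_prompt": "",
    "steps": 20,
    "cfg_scale": 7,
    "width": 512,
    "height": 512,
    "sampler_name": "Euler a",
    "seed": 12345,
}
request = urllib.request.Request(
    "http://127.0.0.1:7860/sdapi/v1/txt2img",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    images = json.loads(response.read())["images"]
# the API returns base64-encoded PNGs
with open("repro.png", "wb") as f:
    f.write(base64.b64decode(images[0]))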

What should have happened?

The last step should improve on the ones before, except now it tends to ruin what was building up beautifully.

Commit where the problem happens

645f4e7ef8c9d59deea7091a22373b2da2b780f2

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

--xformers

Additional information, context and logs

sample-00035 sample-00033

Greendayle commented 1 year ago

I haven't generated stuff for a while, and yesterday evening I updated the webui to the latest master to try WD1.4 epoch 2.

And I've noticed that txt2img generation became way worse, even when trying to recreate past images. Interestingly enough, with highres fix enabled the pictures look way better, and img2img works fine.

Something seems to have broken in the past few months.

AI-Casanova commented 1 year ago

Maybe check out #7077 ?

bosbrand commented 1 year ago

@AI-Casanova It also seems to do it with ckpts that are definitely not overcooked, it's just less apparent.

AI-Casanova commented 1 year ago

@bosbrand Just remembered that thread and thought it might give you a bit of extra info in your search. Cheers!

bosbrand commented 1 year ago

@AI-Casanova Thanks! I can try to train the exact same set again, with less steps and see how they compare.

JD1234JD1234 commented 1 year ago

I have this exact issue: 19/20 steps look fine, but as soon as it finalises, the image gets distorted, almost like a broken VAE.

ghost commented 1 year ago

I remember this happened when the VAE auto-selection was bugged and was supposedly fixed, maybe one or two weeks ago. I'm not entirely sure if it just got better or remained in the bugged-out state; I think it got a bit better after the fix.

edit: I made a new clean install and this time I put my VAE into the VAE folder instead of just next to the models, and everything is perfect now. My old install told me it loaded the VAE but probably didn't do so correctly. My new install, however, doesn't tell me it loaded a VAE, but it works pretty well now (??? :D)

Hivemind11 commented 1 year ago

I've got the same issue.

alexbfree commented 1 year ago

I have the same issue. I ran some tests, and even when checking out an earlier version of the codebase I could not reproduce the high-quality images I had produced previously (even with identical prompt, settings, seed, etc.). The quality is much reduced.

It seems like the checkpoint/model has actually been damaged in some way; certainly the hash is different (compared to what was saved into the PNG info previously). This seems like a very serious bug.

I haven't been able to narrow down exactly the conditions under which it happens, but it seems to be more noticeable with .safetensors models than with .ckpt files.

As an example here is an original image generated using a sample prompt that was in a SD tutorial, generated on 14th January with whatever latest a1111 code was at that time:

03011-1464117713-a beautiful billie eilish christina hendricks alluring instagram model in crop top, by guweiz and wlop and ilya kuvshinov and ar

Here is today's attempt to recreate the same image with the same prompt, seed, and settings, using the same model file.

00001-1464117713-a beautiful billie eilish christina hendricks alluring instagram model in crop top, by guweiz and wlop and ilya kuvshinov and ar

As you can see the quality is much worse.

The hi-res fix improves things a bit but still nothing like what we had before:

00002-1464117713-a beautiful billie eilish christina hendricks alluring instagram model in crop top, by guweiz and wlop and ilya kuvshinov and ar

In order to regenerate the image at a quality similar to the original, I had to redownload checkpoint models, re-do merges, and re-create the model. I was then able to generate something that looked similar quality to the original:

00003-1464117713-a beautiful billie eilish christina hendricks alluring instagram model in crop top, by guweiz and wlop and ilya kuvshinov and ar

My conclusion is that something in a recent a1111 code update saved changes to the model that permanently broke it.

That makes this a much worse bug than the title suggests; it's actually about model corruption, not just about generating bad images.

JD1234JD1234 commented 1 year ago

I think it has to be VAE related.

Karuyo commented 1 year ago

Same problem for me, weird faces since my last update: image

python: 3.10.6  •  torch: 1.13.1+cu117  •  xformers: 0.0.16rc425  •  gradio: 3.16.2  •  commit: [0a851508]  •  checkpoint: [13dfc9921f]

Hivemind11 commented 1 year ago

Here is a comparison of before updating and after. https://imgur.com/a/6qT3NY5 xy_grid-0201-1487442364-lady of the lake, a fey woman partially submerged, arms outstretched (((presenting a long sword))), water dripping off of clothe This is before.

Hivemind11 commented 1 year ago

This is after, without a VAE: https://imgur.com/a/W61Dg8U xy_grid-0204-1487442364-lady of the lake, a fey woman partially submerged, arms outstretched (((presenting a long sword))), water dripping off of clothe

Hivemind11 commented 1 year ago

This is after the update with the VAE added back. You can see that between the first one and this one the generated images are similar but not the same. https://imgur.com/a/u3IZ4As xy_grid-0205-1487442364-lady of the lake, a fey woman partially submerged, arms outstretched (((presenting a long sword))), water dripping off of clothe Sorry about the multiple posts, but it was the only way to add the pictures.

bosbrand commented 1 year ago

Another one where the problem is really visible: the last and second-to-last steps: sample-00088 sample-00089

Greendayle commented 1 year ago

I did a small experiment:

  1. Created a completely new install
  2. Installed 7fd90128eb6d1820045bfe2c2c1269661023a712 from scratch (it's a few months old, the version I've used for a long time)
  3. Downloaded https://huggingface.co/hakurei/waifu-diffusion-v1-3/blob/main/wd-v1-3-float16.ckpt
  4. Generated 4 pictures
  5. Updated to master 2c1bb46c7ad5b4536f6587d327a03f0ff7811c5d
  6. Ran with --reinstall-torch
  7. Generated again

image Seems to be fine... Used negative and positive prompts, Euler a.

sinanisler commented 1 year ago

ok it was VAE :D

Nepherpitou commented 1 year ago

ok it was VAE :D

What do you mean? I have the same problem. What do I need to do with the VAE?

bosbrand commented 1 year ago

It is not the VAE; I have the problem both with and without a VAE. I did a clean reinstall, put my models back, reinstalled torch and xformers, no cigar...

alexbfree commented 1 year ago

I'm not sure all the examples here refer to the same issue. For example, in the one I posted above it's not about the faces/VAE; the whole image is lower quality.

alexbfree commented 1 year ago

Seems to be fine... Used negative and positive prompts, Euler a.

It seems like there is some "corruption moment" that happens, that I and others have hit, that you haven't hit in your tests.

I'm pretty convinced it's a model corruption issue; I tried running the same generation on various commits of the codebase going back to December, and got the same bad results every time. The only thing that fixed it was redownloading and re-merging the checkpoints/safetensors files; everything works fine after doing that, even on the latest codebase.

bosbrand commented 1 year ago

@alexbfree you're right. I'm a little tired of folks muddling up this thread who don't even have the problem that we have.

sinanisler commented 1 year ago

--

bosbrand commented 1 year ago

@sinanisler why don't you fucking read the thread. That is NOT the problem we have. Shut up until you have figured out what the actual problem is.

bosbrand commented 1 year ago

I have figured out a nuance... When you use the save intermediate images script, there is a difference between saving the denoised intermediate steps and saving according to the preview settings. The latter look way better, so something must be going wrong in the denoising steps. When I look at the problem, it looks like the denoising steps are regularly incomplete.
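For context, a minimal sketch of why "denoised" intermediates and raw-latent previews can look different, assuming the variance-exploding parameterisation used by the k-diffusion style samplers (noisy latent = clean latent + sigma * noise); the function and variable names here are illustrative, not taken from the webui code:

import torch

def pred_x0_from_eps(x: torch.Tensor, eps: torch.Tensor, sigma: float) -> torch.Tensor:
    # "denoised" estimate of the clean latent, given the noisy latent x and
    # the predicted noise eps at noise level sigma (variance-exploding form)
    return x - sigma * eps

# toy check: if eps equals the injected noise exactly, we recover the clean latent
clean = torch.randn(1, 4, 64, 64)
noise = torch.randn_like(clean)
sigma = 2.5
noisy = clean + sigma * noise
assert torch.allclose(pred_x0_from_eps(noisy, noise, sigma), clean, atol=1e-5)

So a "denoised" intermediate is only the model's current estimate of the final image, which explains why it can diverge from what the raw latent (or the preview) suggests at the same step.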

bosbrand commented 1 year ago

Okay, since people don't seem to read the entire thread or try to understand what the problem is, a summary:

  - Images get messed up in the final steps

@alexbfree provided evidence that models get messed up. How can that happen and how can it be prevented? How can a model change if not in training?

JD1234JD1234 commented 1 year ago

@bosbrand what if it is something to do with the VAE, but not in the normal sense? Yes, I have the same problem with or without a VAE, but maybe it's something in the automatic code related to the VAE. 19/20 steps are fine and then it's almost like the CFG goes to 1000 for the last step; it seems like the same moment a VAE would kick in.

Mich-666 commented 1 year ago

I'm pretty convinced it's a model corruption issue; I tried running the same generation on various commits of the codebase going back to December, and got the same bad results every time. The only thing that fixed it was redownloading and re-merging the checkpoints/safetensors files; everything works fine after doing that, even on the latest codebase.

This would be pretty easy to check, then. Just look at the last-modified date of the file in question, try using it with the latest commit, and see if it changes.

Or are you implying the model is not loaded correctly into memory?

Personally, I think this might also be an issue with fp16 models, if there was a change in how the result is computed. Another thing: there is a default VAE now; what if it is applied to models that don't specify one?

Worrah commented 1 year ago

I noticed the same problem. The thing is, if you set the live preview setting to "Combined" instead of the default "Prompt", it starts showing a result quite close to the finished one. But the results shown with the "Prompt" setting are usually significantly better, and there seems to be no way to reproduce that result as the finished one.
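For reference, a minimal sketch of how the two previews presumably relate under classifier-free guidance: the "Prompt" preview would correspond to the positive-prompt prediction alone, while the sampler actually steps with the combined (guided) prediction, which is what "Combined" shows. The mapping of preview modes to these tensors is an assumption, not something verified against the webui code:

import torch

def cfg_combine(pred_uncond: torch.Tensor, pred_cond: torch.Tensor, cfg_scale: float) -> torch.Tensor:
    # classifier-free guidance: push the positive-prompt prediction away from
    # the unconditional/negative-prompt prediction by cfg_scale
    return pred_uncond + cfg_scale * (pred_cond - pred_uncond)

# toy example with dummy latents
pred_cond = torch.randn(1, 4, 64, 64)    # prediction for the positive prompt
pred_uncond = torch.randn(1, 4, 64, 64)  # prediction for the negative/empty prompt
guided = cfg_combine(pred_uncond, pred_cond, cfg_scale=8.0)
# at cfg_scale > 1 the guided result overshoots pred_cond, which would be one
# reason the final ("Combined") image can differ noticeably from the "Prompt" preview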

JD1234JD1234 commented 1 year ago

@Mich-666 what is this default VAE? How do we turn it off, and where did you find this out? A hidden VAE in the background somewhere sounds exactly like what's going on.

Kamekos commented 1 year ago

Does anyone have a working fix?

Mich-666 commented 1 year ago

If you are using xformers the result is non-deterministic, i.e. different generations produce different results.

But from what I found, this is most likely a broken-model issue. Try running it through Model Toolkit and fixing it: https://github.com/arenatemp/stable-diffusion-webui-model-toolkit

To quote the relevant part: "The WebUI expects a checkpoint to contain the VAE, the UNET and CLIP in the model. If one is missing then it will continue using whatever was last loaded (unless you load it first, in which case it is left uninitialized and you will probably see NaN-related errors)."

Meaning some incomplete models can be prone to error, as they depend on the models you load before them. That also explains why changing models in the WebUI sometimes leads to different results, since broken models usually take the VAE or CLIP from the last model if theirs is missing. Models produced by heavy merging are also likely to have broken CLIP position IDs. After fixing those models with the mentioned toolkit (the "Fix broken CLIP position IDs" option is in the WebUI settings) I have never encountered those errors again.
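A quick way to see what a checkpoint actually contains is to look at its top-level key prefixes. Here is a minimal sketch for a .safetensors file, assuming the usual SD 1.x layout where first_stage_model.* is the VAE, cond_stage_model.* is CLIP and model.diffusion_model.* is the UNET; the prefixes and the position_ids key name are assumptions about that layout, and the file path is a placeholder:

from safetensors import safe_open

# assumed key prefixes for the three components of an SD 1.x checkpoint
PREFIXES = {
    "VAE": "first_stage_model.",
    "CLIP": "cond_stage_model.",
    "UNET": "model.diffusion_model.",
}
POSITION_IDS_KEY = "cond_stage_model.transformer.text_model.embeddings.position_ids"

def inspect_checkpoint(path: str) -> None:
    with safe_open(path, framework="pt", device="cpu") as f:
        keys = set(f.keys())
        for name, prefix in PREFIXES.items():
            count = sum(1 for k in keys if k.startswith(prefix))
            print(f"{name}: {count} tensors" if count else f"{name}: MISSING")
        # the "broken CLIP position IDs" issue concerns this tensor, which
        # should hold the integers 0..76 for SD 1.x
        if POSITION_IDS_KEY in keys:
            ids = f.get_tensor(POSITION_IDS_KEY).flatten().tolist()
            print("position_ids ok:", ids == list(range(77)))

inspect_checkpoint("some-model.safetensors")  # hypothetical path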

JD1234JD1234 commented 1 year ago

@Mich-666 when I use the toolkit and click Advanced I get this:

Architecture: SD-v1, UNET-v1, UNET-v1-SD, CLIP-v1, CLIP-v1-SD, VAE-v1, VAE-v1-SD

Rejected:

UNET-v1-Inpainting: Missing required keys (1 of 686)
  model.diffusion_model.input_blocks.0.0.weight (320, 9, 3, 3)

UNET-v2-SD: Missing required keys (64 of 686)
  model.diffusion_model.input_blocks.2.1.proj_in.weight (320, 320)
  model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_v.weight (640, 1024)
  model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight (640, 1024)
  model.diffusion_model.output_blocks.8.1.proj_in.weight (640, 640)
  model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight (320, 1024)
  ...

UNET-v2-Depth: Missing required keys (65 of 686)
  model.diffusion_model.input_blocks.2.1.proj_in.weight (320, 320)
  model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_v.weight (640, 1024)
  model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight (640, 1024)
  model.diffusion_model.output_blocks.8.1.proj_in.weight (640, 640)
  model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight (320, 1024)
  ...

SD-v2: Missing required classes UNET-v2 CLIP-v2

SD-v2-Depth: Missing required classes UNET-v2-Depth CLIP-v2 Depth-v2

Not sure exactly how to fix it.

Mich-666 commented 1 year ago

There is nothing wrong with that; the "rejected" items only show what the model doesn't contain. In other words, it's a perfectly fine SD 1.x model with UNET, CLIP and VAE.

If you don't know what you are doing (you can actually exchange the CLIP/VAE for a different one on the Advanced page), just use the basic page, where the Toolkit shows how the model can be optimized and whether anything is wrong. Just save the file as FP16 and you are good to go. The manual describes the advanced settings in more detail.

JD1234JD1234 commented 1 year ago

@Mich-666 OK, then I don't think the toolkit is a fix for the issue in this thread, because I used the model that has the problem.

MobiusR commented 1 year ago

I'm having this exact same issue. The last step (or two?) results in a pretty bad image, particularly faces. Very noticeable with Euler a, but not so much with the DPM++ samplers. EXTREMELY bad with LMS. I have checked a number of models with the toolkit, and everything looks fine, additionally I'm using symlinks for my .safetensors and none of the model files have been modified since I downloaded them, so I don't think there's a model corruption issue here. I've fudged with every setting under the sun to no avail. I suspect this is most likely just something janky in automatic1111.

I found a related thread here: [Bug]: Final Sampling step ruin the image, very visible in DPM2 a There was recently some discussion in there related to sampler issues in img2img, and some commits were made. Not sure if any of that is directly related to this, but it looks like it might be.

SebastiaanVW1984 commented 1 year ago

Very happy to find that more people have this issue; it has been annoying me so much, ruining a lot of renders.

I want to reiterate a couple of super important things already mentioned in the thread that hint at where the problem actually lies:

If you keep your preview setting at "Prompt", you will see the image change at the last possible moment: Screenshot 2023-03-16 195741

But if you turn it to "Combined", then suddenly the image in the preview matches exactly the result you get at the end. That doesn't explain why the "Prompt" setting looks so much better, but to me it feels like the image we are seeing with "Prompt" is just not a good representation of what is actually being generated in the background.

I've used the "save intermediate images" extension to get the next-to-last frame, but that extension also has a setting I need to set to get a good-looking image. Either "noisy" or "according to live preview subject setting" gives you the good images, which is weird because you would expect the denoised ones to look better, so the issue might be in the denoising: Screenshot 2023-03-16 200336

Another hint is that the "Anti-Burn" extension doesn't actually help. It is supposed to smooth over the last frames that are generated, but it still produces an image that is heavily distorted. Again this hints that this is not something that actually happens in the last step; it is just a difference between the preview and the result that is actually being generated.

Another thing that makes me suspect it's not the last step, but rather something after the last step (or something continuous), is that changing the number of steps doesn't matter. It is something that is always visible at the very end.

So in short, this issue exists and is pretty terrible, ruining my gens, but I don't think it happens in the last step. It has to do with what we are actually seeing when we select "Prompt" in the live preview.

I have done tests with no negative prompt, but it seems to still happen there as well. So as far as I can see, it's not that the "Prompt" live preview simply ignores negative prompts; it has something to do with that and with the denoising. I'm not super technical, but maybe someone who is can use those tips to solve it. If they do, please let us know!

P.S. I've done a clean install; that did not fix the issue.

P.P.S. A very quick fix would be a setting that just gives me the image from the render preview ("Prompt", not "Combined"). Then I think the issue would already be solved for me.

Kickermax commented 1 year ago

No one has found a solution? I have the same problem. Previously everything was fine, but now this happens. Reinstalling doesn't help.

M-Shahadat commented 1 year ago

P.P.S. A very quick fix would be a setting that just gives me the image from the render preview ("Prompt", not "Combined"). Then I think the issue would already be solved for me.

Have you been able to find a solution? @SebastiaanVW1984

janwilmans commented 1 year ago

I've seen remarks that it's "just an LMS issue", but that is not the case; it happens with all samplers, just not in all images. Here is an example using 'Euler a'.

I have the same issue: image

checkpoint: abyssorangemix2_Hard.safetensors [e714ee20aa]

masterpiece,cowboy pretty face
Negative prompt: (bad-artist:1.0), (loli:1.2), (worst quality, low quality:1.4), (bad_prompt_version2:0.8), bad-hands-5,(NG_DeepNegative_V1_75T:1.3)
Steps: 10, Sampler: Euler a, CFG scale: 8, Seed: 901305577, Size: 512x512, Model hash: e714ee20aa, Model: abyssorangemix2_Hard
Template: masterpiece,cowboy pretty face
Negative Template: (bad-artist:1.0), (loli:1.2), (worst quality, low quality:1.4), (bad_prompt_version2:0.8), bad-hands-5,(NG_DeepNegative_V1_75T:1.3)

I can screenshot the second-to-last preview image and it is very nice, but then I look at the output image and the face is ruined (objectively ruined; for example, instead of two nostrils I see a gaping mess).

I'm on 22bcc7b from Wed Mar 29 08:58:29 2023 +0300,

using these cmdline options:

set COMMANDLINE_ARGS=--no-half-vae --xformers

Is there anything more I can test to help gather more information on this issue?

janwilmans commented 1 year ago

While doing more experiments I found that if I remove the negative prompt, the effect is smaller (but not gone) and the output image looks better. If I just remove "(worst quality, low quality:1.4)" from the negative prompt, the "mosaic" effect disappears; some distortion still happens, but it is far less obvious. It could also be that the completely different image that is generated is just less susceptible to the "mosaic" effect.

Could it be that somehow, in the last step, the negative weights explode? Or maybe invert?

janwilmans commented 1 year ago

Exploring the problem space, I went back to commit 423f22228306ae72d0480e25add9777c3c5d8fdf

commit 423f22228306ae72d0480e25add9777c3c5d8fdf (HEAD -> 30-oct-202)
Author: Maiko Tan <maiko.tan.coding@gmail.com>
Date:   Sun Oct 30 22:46:43 2022 +0800

    feat: add app started callback

using checkpoint: anything-v4.5.ckpt [fbcf965a62]

dragon's eye, close up, wide anime eyes
Negative prompt: bw, bad hands, (blurry:1.2), duplicate, (duplicate body parts:1.2), disfigured, poorly drawn, extra limbs, fused fingers, extra fingers, twisted, malformed hands, low quality
Steps: 10, Sampler: LMS, CFG scale: 7, Seed: 1591058834, Size: 512x768, Model hash: fbcf965a62, Model: move_anything-v4.5

screenshot of preview at 80%: image

Final output:

00012-1591058834-dragon's eye, close up, wide anime eyes

janwilmans commented 1 year ago

This is the same prompt, on hash 22bcc7be428c94e9408f589966c2040187245d81 (Mar 29, 2023), So it looks like the problem could

commit 22bcc7be428c94e9408f589966c2040187245d81 (HEAD -> master, origin/master, origin/HEAD)
Author: AUTOMATIC <16777216c@gmail.com>
Date:   Wed Mar 29 08:58:29 2023 +0300

    attempted fix for infinite loading for settings that some people experience

01031-1591058834

EfourC commented 1 year ago

I've been having the exact same problem, and it's incredibly obnoxious. I'm certainly not an expert, but when I have time in the next day or so I'm planning to compare this behavior with a different platform like ComfyUI. Maybe that can help narrow down whether it's a problem in the models or in the UI logic interfacing with them. (Edit: or a dependency, as you mentioned.)

EfourC commented 1 year ago

Quick update: my initial test with ComfyUI using the same venv as the WebUI gave the same distortion (using your example's prompts and settings).

janwilmans commented 1 year ago

Thanks for checking. I'm going to see if I can find out exactly which versions of all the Python modules were in use at the time of commit https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/423f22228306ae72d0480e25add9777c3c5d8fdf

janwilmans commented 1 year ago

OK, some progress: I wrote a script that generates a list of commands to install the Python package versions that were available at the time of the current commit:

import json
import subprocess
from datetime import datetime
from urllib.request import urlopen

from distutils.version import StrictVersion

def get_package_versions(package_name):
    url = "https://pypi.org/pypi/%s/json" % (package_name,)
    data = json.load(urlopen(url))
    releases = data["releases"]
    result = dict()
    for release in releases.keys():
        #print (release)
        for entry in releases[release]:
            #print ("  ", entry["upload_time"], entry["filename"])
            #print ("  ", entry)
            # we assume the upload_time of the first file is the upload time of this package version
            iso_date = entry["upload_time_iso_8601"].replace("Z", "+00:00")
            result[datetime.fromisoformat(iso_date)] = release
            #print (entry)
            break
    return result

# finds the last release before 'timestamp' 
def get_package_version_before(name, timestamp):
    package_versions = get_package_versions(name)
    result = ""
    for package_version in sorted(package_versions.keys()):
        if package_version > timestamp:
            break
        result = package_versions[package_version]
    return result

def get_commit_date():
    # committer date of HEAD in strict ISO 8601 format
    cmd = ["git", "show", "-s", "--format=%cI", "HEAD"]
    date = subprocess.check_output(cmd).decode().strip()
    return datetime.fromisoformat(date)

def get_pip_list():
    # map of the currently installed package names to their versions
    cmd = ["pip", "list", "--format", "json"]
    output = subprocess.check_output(cmd).decode().strip()
    piplist = json.loads(output)
    result = dict()
    for entry in piplist:
        result[entry["name"]] = entry["version"]
    return result

commit_date = get_commit_date()
print("commit date: ", commit_date)   
packages = get_pip_list()
for package_name in packages.keys():
    print ("pip install {}=={}".format( package_name, get_package_version_before(package_name, commit_date)))

the result for commit https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/423f22228306ae72d0480e25add9777c3c5d8fdf is:


commit date:  2022-10-30 22:46:43+08:00
pip install absl-py==1.3.0
pip install addict==2.4.0
pip install aiohttp==3.8.3
pip install aiosignal==1.2.0
pip install antlr4-python3-runtime==4.11.1
pip install anyio==3.6.2
pip install async-timeout==4.0.2
pip install attrs==22.1.0
pip install basicsr==1.4.2
pip install bcrypt==4.0.1
pip install beautifulsoup4==4.11.1
pip install cachetools==5.2.0
pip install certifi==2022.9.24
pip install cffi==1.15.1
pip install chardet==5.0.0
pip install charset-normalizer==3.0.0
pip install clean-fid==0.1.34
pip install click==8.1.3
pip install clip==0.2.0
pip install colorama==0.4.6
pip install contourpy==1.0.5
pip install cryptography==38.0.2
pip install cycler==0.11.0
pip install diffusers==0.6.0
pip install einops==0.5.0
pip install facexlib==0.2.5
pip install fairscale==0.4.12
pip install fastapi==0.85.1
pip install ffmpy==0.3.0
pip install filelock==3.8.0
pip install filterpy==1.4.5
pip install font-roboto==0.0.1
pip install fonts==0.0.3
pip install fonttools==4.38.0
pip install frozenlist==1.3.1
pip install fsspec==2022.10.0
pip install ftfy==6.1.1
pip install future==0.18.0
pip install gdown==4.5.2
pip install gfpgan==1.3.8
pip install google-auth==2.13.0
pip install google-auth-oauthlib==0.7.0
pip install gradio==3.8.1
pip install grpcio==1.50.0
pip install h11==0.14.0
pip install httpcore==0.15.0
pip install httpx==0.23.0
pip install huggingface-hub==0.10.1
pip install idna==3.4
pip install imageio==2.22.2
pip install importlib-metadata==5.0.0
pip install inflection==0.5.1
pip install Jinja2==3.1.2
pip install jsonmerge==1.8.0
pip install jsonschema==4.16.0
pip install kiwisolver==1.4.4
pip install kornia==0.6.8
pip install lark==1.1.3
pip install lazy_loader==0.1
pip install linkify-it-py==2.0.0
pip install llvmlite==0.39.1
pip install lmdb==1.3.0
pip install lpips==0.1.4
pip install Markdown==3.4.1
pip install markdown-it-py==2.1.0
pip install MarkupSafe==2.1.1
pip install matplotlib==3.6.1
pip install mdit-py-plugins==0.3.1
pip install mdurl==0.1.2
pip install multidict==6.0.2
pip install networkx==3.0b1
pip install numba==0.56.3
pip install numpy==1.23.4
pip install oauthlib==3.2.2
pip install omegaconf==2.3.0.dev1
pip install opencv-python==3.4.18.65
pip install orjson==3.8.1
pip install packaging==21.3
pip install pandas==1.5.1
pip install paramiko==2.11.0
pip install piexif==1.1.3
pip install Pillow==9.2.0
pip install pip==22.3
pip install protobuf==4.21.9
pip install pyasn1==0.4.8
pip install pyasn1-modules==0.2.8
pip install pycparser==2.21
pip install pycryptodome==3.15.0
pip install pydantic==1.10.2
pip install pyDeprecate==0.3.2
pip install pydub==0.25.1
pip install PyNaCl==1.5.0
pip install pyparsing==3.0.9
pip install pyrsistent==0.18.1
pip install PySocks==1.7.1
pip install python-dateutil==2.8.2
pip install python-multipart==0.0.5
pip install pytorch-lightning==1.8.0rc1
pip install pytz==2022.5
pip install PyWavelets==1.4.1
pip install PyYAML==6.0
pip install realesrgan==0.3.0
pip install regex==2022.9.13
pip install requests==2.28.1
pip install requests-oauthlib==1.3.1
pip install resize-right==0.0.2
pip install rsa==4.9
pip install scikit-image==0.19.3
pip install scipy==1.9.3
pip install setuptools==65.5.0
pip install six==1.16.0
pip install sniffio==1.3.0
pip install soupsieve==2.3.2.post1
pip install starlette==0.21.0
pip install tb-nightly==2.11.0a20221030
pip install tensorboard==2.10.1
pip install tensorboard-data-server==0.6.1
pip install tensorboard-plugin-wit==1.8.1
pip install tifffile==2022.10.10
pip install timm==0.6.11
pip install tokenizers==0.13.1
pip install tomli==2.0.1
pip install torch==1.13.0
pip install torchdiffeq==0.2.3
pip install torchmetrics==0.10.1
pip install torchvision==0.14.0
pip install tqdm==4.64.1
pip install transformers==4.23.1
pip install typing_extensions==4.4.0
pip install tzdata==2022.6
pip install uc-micro-py==1.0.1
pip install urllib3==1.26.12
pip install uvicorn==0.19.0
pip install wcwidth==0.2.5
pip install websockets==10.4
pip install Werkzeug==2.2.2
pip install wheel==0.38.0
pip install xformers==0.0.13
pip install yapf==0.32.0
pip install yarl==1.8.1
pip install zipp==3.10.0

These are 142 packages, while at HEAD there are 172 (so +30), and ~100 of the 142 now have a newer version ;)
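A possible variation on the script above (assuming the same get_package_version_before helper and the packages and commit_date variables it defines): write the pins to a requirements file instead of printing individual pip install lines, so a fresh venv can install everything in a single pass and pip resolves the whole set together:

# drop-in replacement for the final print loop of the script above
with open("requirements-pinned.txt", "w") as req_file:
    for package_name in packages.keys():
        version = get_package_version_before(package_name, commit_date)
        if version:  # skip packages that had no release before the commit date
            req_file.write("{}=={}\n".format(package_name, version))

# afterwards, in a fresh venv:
#   pip install -r requirements-pinned.txt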

janwilmans commented 1 year ago

I tried to downgrade the Python modules to the older versions; however, this did not work correctly. Some 'already satisfied' dependencies are not downgraded automatically, so I will have to do this again from a clean venv.

For the record, I tried to re-generate the image anyway. This yielded a different problem: the GPU was not detected and/or torch was reported as "not compiled with CUDA support". To work around this, I added --skip-torch-cuda-test and used the CPU instead (it's just a test for completeness anyway).

Another example: image

dragon's eye, close up, wide anime eyes
Negative prompt: bw, bad hands, (blurry:1.2), duplicate, (duplicate body parts:1.2), disfigured, poorly drawn, extra limbs, fused fingers, extra fingers, twisted, malformed hands, low quality
Steps: 10, Sampler: LMS, CFG scale: 7, Seed: 1591058834, Size: 512x768, Model hash: 6030dabe, Model: anything-v4.5

I'm not sure why the image is different now, but the model hash is also different. However, I checked and the md5sum didn't change, so it must be the way the hash is calculated that has changed.
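For what it's worth, here is a sketch of how I believe the older short model hash was computed (a sha256 over a small chunk at a fixed offset, truncated to 8 hex characters), alongside a full-file sha256 to compare against what the UI now reports; the offset, chunk size and truncation lengths are assumptions from memory, not verified against the webui source:

import hashlib

def old_style_hash(filename: str) -> str:
    # older webui "model hash": sha256 over a 0x10000-byte chunk read at
    # offset 0x100000, truncated to 8 hex chars (offsets are an assumption)
    m = hashlib.sha256()
    with open(filename, "rb") as f:
        f.seek(0x100000)
        m.update(f.read(0x10000))
    return m.hexdigest()[:8]

def full_sha256(filename: str) -> str:
    # newer hashes appear to be based on the sha256 of the whole file
    m = hashlib.sha256()
    with open(filename, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            m.update(chunk)
    return m.hexdigest()

print(old_style_hash("anything-v4.5.ckpt"))
print(full_sha256("anything-v4.5.ckpt")[:10])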

Furthermore, in the current state the GPU was not detected and the CPU was used, so maybe that has an effect on the output?

$ md5sum.exe anything-v4.5.ckpt
9596e3cbdbc29fa8d4ec91551ff58dbc *anything-v4.5.ckpt

janwilmans commented 1 year ago

After a couple more hours of experimentation, I cannot find any version where this actually works correctly. I'm at a loss as to how to proceed.

janwilmans commented 1 year ago

Tested with different commandline options:

set COMMANDLINE_ARGS=--no-half --no-half-vae --xformers --autolaunch
set COMMANDLINE_ARGS=--no-half-vae --xformers
set COMMANDLINE_ARGS=--autolaunch
set COMMANDLINE_ARGS=

No change, still the same distorted output.