Closed · ZeroCool22 closed this issue 2 years ago
The same thing happened with the hlky fork: at some point color issues appeared in img2img, even though a previous version had zero color change between img2img generations, and when I reported that something was off I would just get gaslighted with "img2img will always have this problem".
Where is this from? Is this just an older release or has it been edited? If it's an older release do you know the date? Thanks!
https://github.com/sd-webui/stable-diffusion-webui/issues/457 This is the relevant issue on the hlky branch, and where/how it was ostensibly resolved
> Where is this from? Is this just an older release or has it been edited? If it's an older release do you know the date? Thanks!
It's an old release (some weeks or a month old at most). I luckily saved it; maybe AUTOMATIC1111 or someone else could take a look, see why this one doesn't have the problem, and find a solution for the updated one.
> https://github.com/sd-webui/stable-diffusion-webui/issues/457 This is the relevant issue on the hlky branch, and where/how it was ostensibly resolved
FYI that fix has already been applied to this repo via https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/135
> It's an old release (some weeks or a month old at most). I luckily saved it; maybe AUTOMATIC1111 or someone else could take a look, see why this one doesn't have the problem, and find a solution for the updated one.
@ZeroCool22 do you by any chance have the commit hash for the version? If not could you supply a version without the models so we don't have to download multiple GBs? 😃 Thanks!
Since there seems to be a bit of confusion around what's happening with color correction, I just wanted to lay out what I know as a sort of timeline:
To my knowledge, with the latest master and color correction enabled, there should be no color skew (neither to cyan nor to magenta). The color of each image in the loop is reset to match the histogram of the original input image. If you are seeing a skew to magenta, confirm color correction is enabled in the settings.
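For what it's worth, the correction step described above boils down to something like this (a minimal sketch assuming the usual OpenCV + scikit-image stack; the function name and variables are mine, not the repo's):

```python
import cv2
import numpy as np
from PIL import Image
from skimage import exposure

def match_to_original(generated: Image.Image, original: Image.Image) -> Image.Image:
    """Force the generated image's color histogram back onto the original's."""
    # Work in LAB so lightness and color channels are matched independently.
    target = cv2.cvtColor(np.asarray(original), cv2.COLOR_RGB2LAB)
    image = cv2.cvtColor(np.asarray(generated), cv2.COLOR_RGB2LAB)
    matched = exposure.match_histograms(image, target, channel_axis=2)
    return Image.fromarray(cv2.cvtColor(matched.astype("uint8"), cv2.COLOR_LAB2RGB))
```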
That's not to say color correction is perfect. The remaining issues as I understand them are:

- the corrected image comes out duller than the uncorrected render
- the histogram matching introduces some posterisation/banding

You really see both of these issues if you have progressive rendering enabled in the UI: the image looks great as it's being generated, but then the saved file is somewhat dull and posterised.
I'm currently experimenting with the following to work around the above somewhat:

- Color correct using the static LUTs provided by @Mozoloa in #410 (Bringing some help regarding the colour shift - Correction LUT - Need some testers!). Not sure how these are supposed to work - I guess the theory is they target the tiny magenta skew if it is consistently introduced by SD (which is not my experience; how much magenta is introduced seems to depend at least on denoising strength).
I'll put a link up to my hacky branch with the above soon. None of those workarounds are a silver bullet. Ultimately, we can't get perfect results unless the original magenta skew is resolved (and this seems to be common to all the forks). What could it be? Some asymmetry in the noise that is added to images?
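If someone wants to test the noise-asymmetry idea, one crude diagnostic is to run a number of uncorrected img2img passes and log the mean of each LAB channel: a steady upward drift in a* would confirm the magenta bias. A sketch (here `run_img2img` is a placeholder for whatever generation call you have available, not an actual API of this repo):

```python
import cv2
import numpy as np
from PIL import Image

def measure_color_drift(init_img: Image.Image, run_img2img, n_iter: int = 20) -> None:
    """Log per-channel LAB means over repeated uncorrected img2img passes."""
    img = init_img
    for i in range(n_iter):
        img = run_img2img(img)  # one img2img pass, no color correction
        lab = cv2.cvtColor(np.asarray(img), cv2.COLOR_RGB2LAB)
        l_mean, a_mean, b_mean = lab.reshape(-1, 3).mean(axis=0)
        # In OpenCV's 8-bit LAB, a* above 128 leans magenta/red, below leans green.
        print(f"pass {i + 1:2d}: L*={l_mean:6.2f}  a*={a_mean:6.2f}  b*={b_mean:6.2f}")
```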
> @ZeroCool22 do you by any chance have the commit hash for the version? If not could you supply a version without the models so we don't have to download multiple GBs? 😃 Thanks!
I removed the model and GFPGAN, but it's still around 6 GB. What more could I remove to bring the size down?
> - Color correct using the static LUTs provided by @Mozoloa in #410 (Bringing some help regarding the colour shift - Correction LUT - Need some testers!). Not sure how these are supposed to work - I guess the theory is they target the tiny magenta skew if it is consistently introduced by SD (which is not my experience; how much magenta is introduced seems to depend at least on denoising strength).
I didn't realise the skew was denoising-dependent, you might be right, but so far the LUT has been helping a lot, and I actually made a new one that may be better; I posted it in my issue #410.
My workflow is very rarely to feed a generated image directly back into SD. I usually copy and paste into photoshop, and I keep the parts that I want, so every part I feed back to SD is actually a composite of the input and one or more of the outputs. I find this to be way better, and in doing this I can actually use a PS action that applies the lut to everything I paste into my canvas, and it's very convenient.
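For anyone who wants to apply such a LUT outside Photoshop, a minimal sketch using Pillow is below; the `.cube` parser is deliberately simplistic and the file names are hypothetical:

```python
from PIL import Image, ImageFilter

def load_cube_lut(path: str) -> ImageFilter.Color3DLUT:
    """Very small .cube reader: grabs LUT_3D_SIZE and the flat RGB table."""
    size, table = 0, []
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[-1])
            elif line and (line[0].isdigit() or line[0] in "-."):
                table.extend(float(v) for v in line.split())
    # Both .cube files and Pillow's Color3DLUT order entries red-fastest,
    # so the table can be passed through unchanged.
    return ImageFilter.Color3DLUT(size, table)

img = Image.open("looped_output.png").convert("RGB")
corrected = img.filter(load_cube_lut("anti_magenta.cube"))  # hypothetical LUT file
corrected.save("looped_output_corrected.png")
```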
@ZeroCool22 you can remove the venv folder.
That being said, maybe it is an important factor: could the problem be caused by an external dependency, i.e. a library used to process the images before export, with the venv folder holding an old version of it?
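One quick way to check that hypothesis (a small sketch assuming Python 3.8+; run it with each build's own venv interpreter and diff the two outputs - the package list is my guess at the relevant ones):

```python
from importlib.metadata import PackageNotFoundError, version

for pkg in ("Pillow", "numpy", "opencv-python", "scikit-image", "torch"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```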
@Mozoloa you're absolutely right, that's a better workflow for many/most use cases. That said, loopback is great for quickly creating cool animations. 😄 Thanks for the LUTs.
@ZeroCool22 yes for future reference you can remove venv and model.ckpt which together would save ~11GB uncompressed.
In any case, the version you are referencing is after the original color correction logic was merged, but before the cyan skew bug was introduced & fixed, and before color correction was made optional and disabled by default.
My expectation is that the latest master with color correction enabled should be the same as your old version. If that's not the case it would be great to see some specific images showing the difference.
Here you have it: https://www.fireload.com/7ff5ed42973e4021/stable-diffusion-webui_Original.rar
PS: The attention.py is from the neonsecret repo, but you have the original one zipped.
duplicate of #541 #481
That is not a duplicate of those issues, although it concerns the same original problem (one that has only been worked around, introducing its share of problems along the way), and the original question hasn't been answered: it depended on @ZeroCool22 providing the folder of the old commit where the shift didn't occur without any correction fix, for review by the devs.
This was an effort to find the very source of the problem, not to add a compensating filter on top of it, like I did with LUTs and the devs did with the post-generation histogram match. If we are to believe @ZeroCool22 here, everything was fine at some point (provided in his link). I revived the fire with #2754 and may actually try his build soon.
After testing the provided build I can say two things:

1. The color shift is still there when manually feeding generated images back into img2img.
2. The image seems fine when doing loopback instead, but that's only because a hidden correction is applied there: it's not in the settings because at the time it was hardcoded into the loopback script, active whenever skimage was successfully imported:
```python
# Excerpt from the loopback script in that build; cv2, numpy (np), PIL's
# Image, and the webui's own p/state/process_images come from the
# surrounding file.
do_color_correction = False
try:
    from skimage import exposure
    do_color_correction = True
except:
    print("Install scikit-image to perform color correction on loopback")

for i in range(n_iter):
    if do_color_correction and i == 0:
        # LAB histogram of the very first input becomes the correction target.
        correction_target = cv2.cvtColor(np.asarray(init_img.copy()), cv2.COLOR_RGB2LAB)

    p.n_iter = 1
    p.batch_size = 1
    p.do_not_save_grid = True

    state.job = f"Batch {i + 1} out of {n_iter}"

    processed = process_images(p)

    if initial_seed is None:
        initial_seed = processed.seed
        initial_info = processed.info

    init_img = processed.images[0]

    if do_color_correction and correction_target is not None:
        # Match each output's LAB histogram to the target before looping it back.
        init_img = Image.fromarray(cv2.cvtColor(exposure.match_histograms(
            cv2.cvtColor(
                np.asarray(init_img),
                cv2.COLOR_RGB2LAB
            ),
            correction_target,
            channel_axis=2
        ), cv2.COLOR_LAB2RGB).astype("uint8"))

    p.init_images = [init_img]
    p.seed = processed.seed + 1
    # Decay denoising strength each pass, clamped to [0.1, 1].
    p.denoising_strength = min(max(p.denoising_strength * denoising_strength_change_factor, 0.1), 1)
    history.append(processed.images[0])
```
So either @ZeroCool22 got confused by the hidden correction in loopback (and didn't notice the resulting banding issue) and didn't properly test manual img2img feedback, or he didn't provide us with the right build.
https://www.fireload.com/7ff5ed42973e4021/stable-diffusion-webui_Original.rar
UPDATED link, without venv and model.
So, if at that point everything color-related was working fine, what introduced the problem? 💡 The loopback feature is there too, and it doesn't generate color problems either, even when you use it with a large iteration count.
There wasn't a Color Correction option in the settings either.