Closed: xxlukexx closed this issue 7 months ago
have you updated to the latest version? I think I fixed this already
I think I'm up to date, but I might be doing something wrong with git, as I'm still getting the error.
`git log -1` gives me:

```
commit 2255ae76602d5b122fa04d19bda650fa40899c02 (HEAD -> main, origin/main, origin/HEAD)
Author: matt3o <matt3o@gmail.com>
Date:   Mon Nov 20 20:23:57 2023 +0100

    fix compatibility with deep shrink
```
I need to see the workflow
Sure. This workflow is a WIP and a bit of a state, so I've deleted everything that isn't needed to reproduce this error.
Generating from the current state causes the error for me. Bypassing the deep shrink node works.
Let me know if there's anything else I can send you, or anything you'd like me to test.
Cheers
are you using lcm sampler with lcm?
In that workflow I'm only patching in LCM for the upscale pass. The first pass (where the error occurs) isn't using it.
The error occurs both with LCM (so that's the LoRA and ModelSamplingDiscrete nodes) and without.
I'm sorry, there are really too many variables in that workflow. I would really appreciate it if you could streamline it down to the minimum number of nodes that actually cause the error... using default nodes as much as possible
The deep shrink downscale rate can be any value, not just 2.0. The current implementation cannot handle this.
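I don't know the actual internals, so this is only a guess, but my hunch is that the masked attention path has to map a flattened token count back onto a 2-D latent grid, and a non-integer shrink factor breaks the assumption that the grid is the original size divided by a power of two. A toy sketch (all names here are made up, not the real code):

```python
def infer_hw(num_tokens, orig_h, orig_w):
    # Recover the 2-D (h, w) grid of image tokens from the flattened
    # sequence length, assuming the grid is the original latent
    # divided by a power of two (as it is without deep shrink).
    for k in range(5):
        h, w = orig_h // (2 ** k), orig_w // (2 ** k)
        if h * w == num_tokens:
            return h, w
    raise ValueError(f"cannot map {num_tokens} tokens onto a {orig_h}x{orig_w} grid")

# 128x128 latent, deep shrink factor 2.0: the block sees 64*64 tokens,
# which is found at k=1
print(infer_hw(64 * 64, 128, 128))

# factor 1.5: int(128 / 1.5) = 85, so the block sees 85*85 tokens;
# no power-of-two divisor matches and the lookup blows up
try:
    print(infer_hw(85 * 85, 128, 128))
except ValueError as e:
    print(e)
```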
it works in all the tests I've done, but I'm sure there are cases that trigger this issue; I just need to understand which ones. I believe I know what needs to be fixed, but to be sure I need to be able to replicate it
This is a minimal workflow that actually works no matter what values I enter: shrink.json
just let me know how to break it :smile:
Update: Okay, got it. There's a certain threshold to the downscale factor; 1.99 still works (maybe there's some rounding?)
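If I had to guess at the rounding (this is pure arithmetic, not the real code):

```python
# If the shrunken latent size is rounded, 1.99 lands on exactly the
# same grid as 2.0, so nothing downstream can tell the difference.
for factor in (2.0, 1.99, 1.5):
    print(factor, round(128 / factor))
# 2.0  -> 64
# 1.99 -> 64  (same 64x64 grid as 2.0, so it still works)
# 1.5  -> 85  (85x85 no longer matches any power-of-two grid)
```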
> I'm sorry, there are really too many variables in that workflow. I would really appreciate it if you could streamline it down to the minimum number of nodes that actually cause the error...
I can only apologise for my spaghetti workflow! :)
I'll try and reduce it as much as possible and see if I get the error. I won't get a chance to look at it until this evening but I'll update you on what I find.
Not because of the mask?
> I'll try and reduce it as much as possible and see if I get the error...
don't worry, I've found the problem. I was lowering the factor by too small an amount. If you set it to something like 1.5 or 4 with a mask, it breaks. It doesn't have the highest priority since it works with the default, but I'll try to work on this issue, or maybe laksjdjf will
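Maybe something like this could work (just a sketch with made-up names, not the actual code: the idea is to derive the grid from the token count itself instead of assuming power-of-two divisors):

```python
import math

def infer_hw_any_factor(num_tokens, orig_h, orig_w):
    # Try every factorisation of the token count and keep the pair
    # whose aspect ratio is closest to the original latent's, so any
    # downscale factor (1.5, 4, ...) yields a usable grid for
    # resizing the mask.
    target = orig_h / orig_w
    best = None
    for h in range(1, math.isqrt(num_tokens) + 1):
        if num_tokens % h == 0:
            w = num_tokens // h
            for hh, ww in ((h, w), (w, h)):
                score = abs(hh / ww - target)
                if best is None or score < best[0]:
                    best = (score, hh, ww)
    return best[1], best[2]

print(infer_hw_any_factor(85 * 85, 128, 128))  # (85, 85) for factor 1.5
print(infer_hw_any_factor(32 * 32, 128, 128))  # (32, 32) for factor 4
```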
This is only an issue when using the recently added Deep Shrink node; everything works fine if that node is bypassed or removed.