simonmecho opened this issue 2 months ago
It's a bit strange indeed. Is there any error message in the terminal? Or try the .bin format model instead.
Actually I've already tried the .bin model, and in both cases there are no errors in the console, which is quite weird...
What is going on here???? How do I fix it?
GGUF model or the normal one?
And the generation takes forever; it's really slow.
Try using a non-GGUF format model with the Hyper-Flux LoRA.
Here is my workflow, can you give it a check? There are 3 workflows: CNet Union, MistoLine, and Xlabs.
It looks quite OK. Have you tried our example workflow?
Yes, same result.
Are you loading the LoRA with the CLIP port, or using the model-only LoRA loader?
I was using the model-only LoRA loader. I also bypassed the LoRA node and the result is the same: the KSampler takes a very long time to render.
What kind of GPU are you using? And what is your PyTorch version?
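(For reference, a minimal sketch for checking this from the Python environment ComfyUI runs in, assuming a standard PyTorch install:)

```python
import torch

# Print PyTorch build, CUDA toolkit version, and GPU details.
print("PyTorch:", torch.__version__)
print("CUDA build:", torch.version.cuda)
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("GPU:", props.name, f"({props.total_memory / 2**30:.1f} GiB VRAM)")
else:
    print("CUDA is not available - sampling will fall back to CPU and be very slow.")
```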
Loader: 445.20 s, KSampler: 1943.20 s
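Times like these usually mean the model does not fit in VRAM and is being offloaded to system RAM (or running on the CPU). A rough sketch of that check follows; the model path is hypothetical and needs to be adjusted to where your Flux checkpoint actually lives:

```python
import os
import torch

# Hypothetical path - point this at the checkpoint your workflow loads.
MODEL_PATH = "models/unet/flux1-dev.safetensors"

model_gib = os.path.getsize(MODEL_PATH) / 2**30
total_gib = torch.cuda.get_device_properties(0).total_memory / 2**30
print(f"Checkpoint: {model_gib:.1f} GiB, VRAM: {total_gib:.1f} GiB")
if model_gib > total_gib * 0.9:
    print("The model likely spills into system RAM, which would explain the slow KSampler.")
```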
Hello, I'm trying the workflow using fp8 and also the GGUF version, but in both cases I only get pure noise as the result. What could be the cause of that? Please see the attached screenshot. Thanks!