Open patientx opened 1 year ago
I understand your issue, but I'm not going over all possible combinations - if HyperTile with LCM results in bad quality, it is what it is unless you want to contribute. Such edge cases take an extreme amount of time for very little return.
I'm having the same out-of-memory problems in ComfyUI as well, so I tried HyperTile there; the output problem isn't present there. Just sharing my findings.
I believe you, and I'm sure it's solvable, but I cannot dedicate time to it.
OK, no problem. Thanks for your hard work. I found a workaround: using SDE as the sampler is slower, but it still produces impressive results in just 4 steps.
When you report upstream, you can mark that issue here for tracking purposes.
Issue Description
When I use any SD 1.5 model I have with LCM's SD 1.5 LoRA and set the sampler to LCM, as required, the results are garbled if I enable HyperTile.
I have to use HyperTile because otherwise, even with --medvram and all the optimizations enabled, generating gives an out-of-memory error on the first generation or, if I am lucky, the second or third one. This is at 512x512.
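For context on why HyperTile helps here: self-attention cost grows with the square of the token count, so splitting the latent into tiles and attending within each tile cuts peak memory roughly by the number of tiles. A minimal NumPy sketch of the tile split/merge step (the helper names `split_tiles` and `merge_tiles` are hypothetical, not SD.Next's actual implementation):

```python
import numpy as np

def split_tiles(latent, tile):
    """Split an (H, W, C) latent into (n_tiles, tile, tile, C) blocks.
    H and W are assumed divisible by `tile` (a simplification)."""
    h, w, c = latent.shape
    t = latent.reshape(h // tile, tile, w // tile, tile, c)
    return t.transpose(0, 2, 1, 3, 4).reshape(-1, tile, tile, c)

def merge_tiles(tiles, h, w):
    """Inverse of split_tiles: reassemble blocks into an (H, W, C) latent."""
    n, tile, _, c = tiles.shape
    t = tiles.reshape(h // tile, w // tile, tile, tile, c)
    return t.transpose(0, 2, 1, 3, 4).reshape(h, w, c)

# A 64x64 latent attended as one block costs O((64*64)^2) in attention;
# 16x16 tiles reduce that to 16 independent O((16*16)^2) blocks.
latent = np.random.rand(64, 64, 4).astype(np.float32)
tiles = split_tiles(latent, 16)
restored = merge_tiles(tiles, 64, 64)
assert np.array_equal(latent, restored)  # split/merge itself is lossless
```

The split/merge round trip is exact, so any quality loss (like the garbled LCM output above) would come from the per-tile attention approximation interacting with a particular sampler, not from the tiling itself.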
Also, if I use a sampler other than the suggested LCM, such as SDE or Euler, the output is fine.
Here are some sample generations and the log.
Model photon with the settings I used; model lyriel with the settings I used.
Some results:
Some outputs with other samplers:
Version Platform Description
Windows 10, 16 GB DDR4-3000, NVMe drive, Ryzen 3600X CPU, RX 6600 GPU
Starting SD.Next
Python 3.10.11 on Windows
Version: app=sd.next updated=2023-11-10 hash=056b04dc url=https://github.com/vladmandic/automatic/tree/master
Platform: arch=AMD64 cpu=AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD system=Windows release=Windows-10-10.0.19045-SP0 python=3.10.11
Setting environment tuning
Torch overrides: cuda=False rocm=False ipex=False diml=True openvino=False
Torch allowed: cuda=False rocm=False ipex=False diml=True openvino=False
Using DirectML Backend
Relevant log output
Backend: Diffusers
Model: SD 1.5
Acknowledgements