camoody1 opened this issue 8 months ago
SDXL Lightning is faster than SDXL, but their weights are different, so if you want to use SDXL Lightning + LayerDiffusion, I guess you would need to retrain LayerDiffusion on SDXL Lightning.
I guess this might be due to SDXL Lightning's use of an additional discriminator during training, which operates on the original latent domain instead of the transparency latent.
If you want faster sampling, other accelerated samplers such as LCM (https://github.com/luosiallen/latent-consistency-model) or TCD (https://github.com/jabir-zheng/TCD) may work.
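For reference, here is a minimal sketch of the LCM-LoRA route in plain diffusers, not the ComfyUI + LayerDiffusion workflow discussed in this thread. The model IDs follow the public LCM-LoRA release; the prompt and step count are just placeholders:

```python
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

# Standard SDXL base checkpoint.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Swap in the LCM scheduler and load the distilled LCM-LoRA weights.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

# LCM samples in very few steps with low guidance (1.0-2.0 is typical).
image = pipe(
    "a glass bottle on a table",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("lcm_sample.png")
```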
@jabir-zheng Using the TCD and LCM LoRAs didn't produce much better images, but using the Lightning LoRA with a standard SDXL checkpoint gave pretty nice results.
Regarding the TCD LoRA, is there a particular sampler and scheduler we should be using in ComfyUI? In my testing I haven't been able to get results that look as good as your samples. Maybe we're still missing something on the Comfy side?
Hello, about "the lightning lora with a standard SDXL checkpoint", can you give me a link? Thanks.
Here is a link to the Lightning LoRAs. You just apply them like any other LoRA and use any standard SDXL checkpoint: https://huggingface.co/ByteDance/SDXL-Lightning
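For anyone wiring this up outside ComfyUI, a minimal diffusers sketch following the ByteDance model card (the 4-step LoRA filename comes from that repo; Lightning expects "trailing" timestep spacing and CFG disabled):

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
from huggingface_hub import hf_hub_download

base = "stabilityai/stable-diffusion-xl-base-1.0"
repo = "ByteDance/SDXL-Lightning"
ckpt = "sdxl_lightning_4step_lora.safetensors"  # 4-step LoRA variant

# Any standard SDXL checkpoint works as the base.
pipe = StableDiffusionXLPipeline.from_pretrained(
    base, torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Apply the Lightning LoRA like any other LoRA, then fuse it.
pipe.load_lora_weights(hf_hub_download(repo, ckpt))
pipe.fuse_lora()

# Lightning was distilled with trailing timestep spacing and no CFG.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

image = pipe(
    "a photo of a cat", num_inference_steps=4, guidance_scale=0
).images[0]
image.save("lightning_sample.png")
```

In ComfyUI the equivalent is just a standard LoRA loader node pointed at the Lightning LoRA file, with steps and CFG set to match the variant (e.g. 4 steps, CFG 1).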
I tried LayerDiffusion using a ComfyUI workflow with an SDXL Lightning model and the results were really quite bad. When I switched to a standard SDXL model, the result was much better. Is this something you expect? Have you tried a Lightning model yourself? Is there any way to improve the output for Lightning checkpoints? I'm so addicted to their speed.
Thank you for your work.