Laidawang opened this issue 8 months ago
ComfyUI already supports LCM LoRA.
Amazing speed, thanks for the reminder!
For some reason LCM is not showing up in the ModelSamplingDiscrete custom node. Any tips welcome...
Image:
Make sure you update ComfyUI to the latest, update/update_comfyui.bat if you are using the standalone.
Am I doing anything wrong? I thought I got all the settings right, but the results are straight up demonic.
Is that just how badly the LCM LoRA performs, even on base SDXL? Workflow used: Example3.json
thanks!!
@IdiotSandwichTheThird Yes, that's expected behavior. It's for faster generation, with some trade-off in quality.
Combine it with HyperTile and there's a slight speed increase over the already high speed. I actually got better quality results from somebody's trained checkpoint on Civitai than from the base model; I don't know exactly why. The custom model was 0.4-0.5 s faster up to batch size 4, while the regular one showed only a 0.1 s difference between a single image and batch size 8. Regular SDXL LoRAs worked with it too. I'm guessing the last 0.1 s could be accounted for by measuring with the script in the link, but hey.
The fastest results I got on Windows were at 1024x1024, batch size 8:
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:03<00:00, 1.01it/s]
Prompt executed in 6.50 seconds
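Those totals are easy to convert into per-image latency with a trivial helper (hypothetical, just for the arithmetic): 6.50 s for a batch of 8 works out to about 0.81 s per image.

```python
def per_image_seconds(total_seconds: float, batch_size: int) -> float:
    """Average wall-clock time per image for one batched prompt execution."""
    if batch_size < 1:
        raise ValueError("batch_size must be >= 1")
    return total_seconds / batch_size

# The run above: "Prompt executed in 6.50 seconds" at batch size 8.
print(f"{per_image_seconds(6.50, 8):.2f} s/image")  # about 0.81 s/image
```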
I'm not going to complain about 0.1s vs a test chart. :-)
Tomorrow I'm going back to trying to get Torch built against CUDA 12.3 on Windows (they implemented lazy loading for CUDA libraries), then take a shot at building TransformerEngine. Integrating it is about as easy as autocast, if I read things right, and I'll see if the auto fp8 stuff can get this down to under half a second per image, lol.
Don't forget to restart ComfyUI after you update it to see "lcm" option under sampler.
Make sure you update ComfyUI to the latest, update/update_comfyui.bat if you are using the standalone.
Am I doing anything wrong? I thought I got all the settings right, but the results are straight up demonic.
Is that just how badly the LCM LoRA performs, even on base SDXL? Workflow used: Example3.json
I tried your json file, except for the VAE (I use the one from the checkpoint loader), and it looks perfect: no face deformations.
When I used this LoRA with an AnimateDiff node to make a video, I got these errors:
[AnimateDiffEvo] - INFO - Loading motion module mm_sd_v15_v2.ckpt
[AnimateDiffEvo] - INFO - Using fp16, converting motion module to fp16
[AnimateDiffEvo] - INFO - Sliding context window activated - latents passed in (48) greater than context_length 16.
[AnimateDiffEvo] - INFO - Injecting motion module mm_sd_v15_v2.ckpt version v2.
Requested to load BaseModel
Requested to load ControlNet
Loading 2 new models
unload clone 0
could not patch. key doesn't exist in model: diffusion_model.middle_block.2.in_layers.2.weight
could not patch. key doesn't exist in model: diffusion_model.middle_block.2.emb_layers.1.weight
could not patch. key doesn't exist in model: diffusion_model.middle_block.2.out_layers.3.weight
could not patch. key doesn't exist in model: diffusion_model.output_blocks.2.1.conv.weight
could not patch. key doesn't exist in model: diffusion_model.output_blocks.5.2.conv.weight
could not patch. key doesn't exist in model: diffusion_model.output_blocks.8.2.conv.weight
100%|██████████| 8/8 [02:01<00:00, 15.25s/it]
I checked that when I remove the LoRA, the 'could not patch' messages don't appear.
I had the same issue, when using this Lora and animatediffEvolved:
got prompt
[AnimateDiffEvo] - INFO - Loading motion module ANIMATE_mm-Stabilized_high.pth
[AnimateDiffEvo] - INFO - Using fp16, converting motion module to fp16
[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (16) less or equal to context_length 16.
[AnimateDiffEvo] - INFO - Injecting motion module ANIMATE_mm-Stabilized_high.pth version v1.
Requested to load BaseModel
Loading 1 new model
could not patch. key doesn't exist in model: diffusion_model.output_blocks.2.1.conv.weight
could not patch. key doesn't exist in model: diffusion_model.output_blocks.5.2.conv.weight
could not patch. key doesn't exist in model: diffusion_model.output_blocks.8.2.conv.weight
Any advice on how to solve this issue would be appreciated. Thanks!
@zov-coder I've made a lot of videos with the LCM LoRA. In the end, this error seems to be just an informational message; it doesn't affect the videos in any way.
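The skip-on-missing-key behavior can be illustrated with a toy sketch (this is not ComfyUI's actual code; the key names are taken from the log above): patches whose target key is absent from the model are reported and ignored, while everything else is applied normally, which is why generation still completes.

```python
def apply_patches(model_keys, lora_patches):
    """Apply only the LoRA patches whose target keys exist in the model.

    Illustrative sketch of the behavior behind the 'could not patch'
    messages: missing keys are reported and skipped, not fatal errors.
    """
    applied, skipped = [], []
    for key in lora_patches:
        if key in model_keys:
            applied.append(key)
        else:
            skipped.append(key)
            print(f"could not patch. key doesn't exist in model: {key}")
    return applied, skipped

# Hypothetical example: one key present in the model, one absent.
model_keys = {"diffusion_model.input_blocks.0.0.weight"}
patches = [
    "diffusion_model.input_blocks.0.0.weight",
    "diffusion_model.output_blocks.2.1.conv.weight",
]
applied, skipped = apply_patches(model_keys, patches)
```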
Any idea about supporting LCM LoRA? This would greatly increase speed. diffusers already supports it: https://github.com/huggingface/blog/blob/main/lcm_lora.md
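For context, the diffusers usage described in that post looks roughly like the sketch below (based on the linked write-up; requires a CUDA GPU and downloads the SDXL weights, and your model IDs or parameters may differ):

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Load the base SDXL pipeline in fp16.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Swap in the LCM scheduler and attach the LCM LoRA weights.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

# LCM needs very few steps and low (or no) classifier-free guidance.
image = pipe(
    "a photo of a cat",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("lcm_cat.png")
```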