Closed JPW0080 closed 1 month ago
You didn't even connect the model part; you only applied the LoRA to the text encoders for the conditioning processing.
The workflow was taken from https://github.com/comfyanonymous/ComfyUI_TensorRT/blob/master/readme_images/image10.png. Based on the image, the TensorRT Loader connects directly to the KSampler's model input?
Yes, the TensorRT model goes straight to the KSampler. If you don't put a LoRA in between the two, then no LoRA is applied.
LoRAs can also affect the CLIP setup, and that is what you applied the LoRA to: CLIP. You didn't apply it to the TensorRT'd main model.
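To make the distinction concrete: a LoRA is a low-rank weight delta that gets merged into a layer's weights, and it can target both the diffusion model and the CLIP text encoders independently. A TensorRT engine has its weights baked in at build time, so a LoRA patched only onto CLIP never touches the compiled main model. Below is a minimal, hypothetical NumPy sketch of the merge itself (the function name, shapes, and rank are illustrative, not ComfyUI's actual implementation):

```python
import numpy as np

def apply_lora(weight, lora_down, lora_up, strength=1.0):
    # Merge a LoRA delta into a weight matrix: W' = W + strength * (up @ down).
    # If this merge never happens on a layer, that layer is unaffected.
    return weight + strength * (lora_up @ lora_down)

# Hypothetical shapes: one 768x768 linear layer with a rank-4 LoRA.
rng = np.random.default_rng(0)
W = rng.standard_normal((768, 768))
down = rng.standard_normal((4, 768))   # "lora_down": projects input to rank r
up = rng.standard_normal((768, 4))     # "lora_up": projects back to full dim
W_patched = apply_lora(W, down, up, strength=0.8)
```

Patching CLIP this way changes the conditioning the sampler receives, which is why you can still see some LoRA effect, but the TensorRT engine's weights stay exactly as they were compiled.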
Understood.
Has anyone else noticed that the Efficient Loader node appears to enable loading LoRAs? Currently only tested a static batch-1 512x512 engine on the SD 1.5 checkpoint realisticVisionV60B1_v51HyperVAE. The speedup is definitely still there on my 3090.
https://github.com/jags111/efficiency-nodes-comfyui
Notice the darkened tone from the nighttime_v1 LoRA:
Without LoRA: