Closed: SuperSecureHuman closed this issue 7 months ago
Here is the expected style of the output.
Sometimes the output is the same as the previous style.
Sometimes it is just plain bad.
@SuperSecureHuman Try fusing the LoRA before compilation, then unfusing and re-fusing the LoRA on each swap. This functionality is tricky, and I hope you understand the basic concepts of CUDA memory management and CUDA Graphs.
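A minimal sketch of that fuse / unfuse / re-fuse flow, using diffusers' LoRA helpers (`load_lora_weights`, `fuse_lora`, `unfuse_lora`, `unload_lora_weights`); the LoRA paths are placeholders and the interaction with the compiled module may differ in practice:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Fuse the first LoRA into the base weights BEFORE compiling,
# so the compiled graph only ever sees plain fused weights.
pipe.load_lora_weights("loras/style_a.safetensors")  # placeholder path
pipe.fuse_lora()

# compiled_pipe = compile(pipe, config)  # whichever compiler you use

# Later, to swap styles: unfuse the old LoRA, load the new one, re-fuse.
pipe.unfuse_lora()
pipe.unload_lora_weights()
pipe.load_lora_weights("loras/style_b.safetensors")  # placeholder path
pipe.fuse_lora()

image = pipe("a portrait in the new style").images[0]
```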
Got it, trying now
If you cannot get it to work, try disabling CUDA Graph and increasing the batch size to improve GPU utilization.
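As a sketch of that fallback, assuming the stable-fast compiler is what is being used here (the import path and the `enable_cuda_graph` field are taken from memory of its README and may vary by version); batching is done with diffusers' `num_images_per_prompt`:

```python
import torch
from diffusers import StableDiffusionPipeline
# Assumed stable-fast import path; may differ depending on the installed version.
from sfast.compilers.diffusion_pipeline_compiler import compile, CompilationConfig

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

config = CompilationConfig.Default()
config.enable_cuda_graph = False  # skip CUDA Graph so weight swapping stays simple
pipe = compile(pipe, config)

# Recover throughput by batching: generate several images per call.
images = pipe(
    "a portrait in the expected style",
    num_images_per_prompt=4,  # a larger batch keeps the GPU busy
).images
```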
Also, a LoRA may include parts beyond the UNet. In that case the text encoder and the VAE need handling as well, and possibly other components too.
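One way to check which components a given LoRA actually touches is to inspect its state-dict keys; the prefixes below are common naming conventions, not guaranteed for every file:

```python
from safetensors.torch import load_file

lora_sd = load_file("loras/style_b.safetensors")  # placeholder path

# Typical key prefixes for each component (conventions vary between trainers).
prefixes = {
    "unet": ("lora_unet_", "unet."),
    "text_encoder": ("lora_te", "text_encoder."),
    "vae": ("lora_vae_", "vae."),
}

for component, pats in prefixes.items():
    count = sum(1 for key in lora_sd if key.startswith(pats))
    print(f"{component}: {count} LoRA tensors")
```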
I'll try it with my use cases and get back if any help is needed with those components.
Thanks!
Hey
I am trying to swap the LoRA of the compiled model with the sample code given in the README, and I get this error:
When I try to replace the weights myself, I get very bad outputs.
My snippet for trying to swap the weights:
Then I run inference.
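A rough sketch of one swap pattern that keeps an already-captured CUDA Graph valid: copy the new values into the existing parameter tensors in place instead of rebinding the parameters to new tensors (otherwise the captured graph keeps pointing at the old memory, which typically shows up as garbage or stale-style outputs). This assumes you can produce a state dict with the new LoRA already merged, e.g. on a second, non-compiled pipeline; the helper name is illustrative, not part of any library:

```python
import torch

@torch.no_grad()
def swap_weights_inplace(module, new_state_dict):
    """Copy new values into the module's existing tensors.

    The compiled/captured graph holds references to the current parameter
    storage, so the swap must not allocate new tensors for the parameters.
    Tensor.copy_() handles any dtype/device conversion of the source values.
    """
    current = module.state_dict()
    for name, new_value in new_state_dict.items():
        current[name].copy_(new_value)

# Usage sketch: `merged_unet_sd` would be the UNet state dict with the new
# LoRA already fused in, produced outside the compiled pipeline.
# swap_weights_inplace(pipe.unet, merged_unet_sd)
```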