GoGiants1 opened this issue 11 months ago
+) When `enable_cuda_graph = False`, the generated image is still worse than the uncompiled pipeline's output.
(Comparison images: reference image, compiled pipe's output, not compiled pipe's output.)
@GoGiants1 If you face quality problems with stable-fast, you can tweak the config. For example, you can set `prefer_lowp_gemm = False` and `enable_fused_linear_geglu = False`.
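A minimal sketch of what that tweak could look like, assuming a stable-fast version that exposes the compiler under `sfast.compilers.diffusion_pipeline_compiler` (the module path has moved between releases) and a `pipe` that is an already loaded diffusers pipeline:

```python
from sfast.compilers.diffusion_pipeline_compiler import CompilationConfig, compile

config = CompilationConfig.Default()
config.enable_xformers = True   # optional accelerators; enable only if installed
config.enable_triton = True
config.enable_cuda_graph = True

# The two flags suggested above for quality problems:
config.prefer_lowp_gemm = False           # avoid low-precision GEMM substitutions
config.enable_fused_linear_geglu = False  # skip the fused linear + GEGLU kernel

pipe = compile(pipe, config)  # compile the already loaded diffusers pipeline
```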
@GoGiants1 I have inspected the code of `StableDiffusionControlNetReferencePipeline` and found that what you said is right. Those `hacked_**` methods are not compatible with CUDA Graph, so please disable it.
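Concretely, the workaround is to leave CUDA graphs off when compiling this particular pipeline; a minimal sketch, assuming `reference_pipe` is the loaded reference pipeline:

```python
from sfast.compilers.diffusion_pipeline_compiler import CompilationConfig, compile

config = CompilationConfig.Default()
config.enable_cuda_graph = False  # the hacked_* forward methods are not CUDA Graph safe
reference_pipe = compile(reference_pipe, config)
```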
@chengzeyi OK, I will disable the CUDA graph option in the reference pipe. Unfortunately, the `prefer_lowp_gemm = False` and `enable_fused_linear_geglu = False` options couldn't resolve the quality degradation issue. Thank you for your answers!
@GoGiants1 Sorry to bother you. I wonder how you used the reference image to replace the background so well. I have tried the `StableDiffusionControlNetReferencePipeline`, but I found that the foreground object gets changed (sometimes with strange appendages, and it cannot maintain the original text well). Could you share any tips or tricks with me?
> @chengzeyi OK, I will disable the CUDA graph option in the reference pipe. Unfortunately, the `prefer_lowp_gemm = False` and `enable_fused_linear_geglu = False` options couldn't resolve the quality degradation issue. Thank you for your answers!
It's really weird; maybe this is a very rare case and most photos won't have this problem?
Hi! Thanks for your amazing work. I tested several pipelines and the speed of this framework is truly impressive!

However, I have encountered an issue when using stable-fast with the `enable_cuda_graph=True` option and the `StableDiffusionControlNetReferencePipeline`. The pipeline hits a problem that causes a restart of the hardware (the entire instance on Hugging Face and the Colab session). Interestingly, when the `enable_cuda_graph` option is set to False, everything seems to function correctly. I tested on A10G (24 GB VRAM) and T4 (16 GB VRAM) instances, and the results were the same in all cases.

Upon further investigation, I suspect that the `hacked_***` functions may be the root cause of this error.

For your reference, I have provided a link to reproducible code: Colab Notebook

Thank you!
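For context, a rough sketch of the setup being described, assuming the pipeline is loaded as the `stable_diffusion_controlnet_reference` community pipeline from diffusers (the ControlNet and base model IDs below are placeholders, not necessarily the ones used in the notebook):

```python
import torch
from diffusers import ControlNetModel, DiffusionPipeline
from sfast.compilers.diffusion_pipeline_compiler import CompilationConfig, compile

# Placeholder checkpoints; substitute whatever the notebook actually loads.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    custom_pipeline="stable_diffusion_controlnet_reference",
    torch_dtype=torch.float16,
).to("cuda")

config = CompilationConfig.Default()
config.enable_cuda_graph = True  # True reproduces the crash described above;
                                 # setting it to False is the working configuration.
pipe = compile(pipe, config)
```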