kohya-ss / sd-scripts


SD3 different inference result using sd3_minimal_inference.py and diffusers #1392

Open · SatMa34 opened this issue 1 week ago

SatMa34 commented 1 week ago

Running inference on the open-source SD3 model with sd3_minimal_inference.py and with diffusers gives totally different results. Is this because of the scheduler? It looks like you use your own hand-written scheduler rather than the scheduler from diffusers.

kohya-ss commented 6 days ago

The scheduler is copied from Diffusers, so the scheduler itself is probably not the cause. Does 'different' mean the quality or the content (pose, angle, composition, etc.)? If the overall quality is similar, the content may still differ because of the random generator (on CPU vs. GPU), even with the same seed.
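For reference, here is a minimal sketch of how the noise generator can be pinned to the CPU on the diffusers side so that same-seed runs are at least comparable; it assumes diffusers >= 0.29 with SD3 support and access to the gated stabilityai/stable-diffusion-3-medium-diffusers checkpoint. The matching noise would still have to be fed to sd3_minimal_inference.py for a true apples-to-apples comparison.

```python
# Sketch (not from this repo): pin the initial-noise generator to the CPU in
# diffusers so that two runs with the same seed start from identical latents,
# independent of the GPU RNG.
# Assumes diffusers >= 0.29 with SD3 support and the gated
# "stabilityai/stable-diffusion-3-medium-diffusers" checkpoint.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
).to("cuda")

# A CPU generator makes the latent noise reproducible across machines/backends.
generator = torch.Generator(device="cpu").manual_seed(42)

image = pipe(
    "a cat holding a sign that says hello world",
    num_inference_steps=28,
    guidance_scale=7.0,
    generator=generator,
).images[0]
image.save("sd3_diffusers_seed42.png")
```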

lzran commented 5 days ago

> The scheduler is copied from Diffusers, so the scheduler itself is probably not the cause. Does 'different' mean the quality or the content (pose, angle, composition, etc.)? If the overall quality is similar, the content may still differ because of the random generator (on CPU vs. GPU), even with the same seed.

I used sd3_minimal_inference.py to generate an image from a prompt like "a cat holding a sign that says hello world", but it outputs only a cat; it seems like the text has been truncated. I have tried both sd3_medium_incl_clips_t5xxlfp16.safetensors and sd3_medium.safetensors and got the same result, which is different from the results of diffusers.
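One quick check is whether the prompt even approaches the 77-token CLIP limit, which would rule tokenizer-level truncation in or out. The sketch below is not part of sd-scripts and assumes the public transformers CLIPTokenizer for openai/clip-vit-large-patch14.

```python
# Sanity-check sketch: count CLIP tokens for the prompt to rule out truncation
# at the 77-token text-encoder limit.
# Assumes the transformers library and the public CLIP-L tokenizer.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
prompt = "a cat holding a sign that says hello world"

ids = tokenizer(prompt).input_ids
print(f"{len(ids)} tokens (limit is {tokenizer.model_max_length})")
# A short prompt like this is far below 77 tokens, so if the sign text is
# missing from the image, the cause is more likely in how the prompt embeddings
# are built (e.g. CLIP-L/CLIP-G/T5 handling) than in tokenizer truncation.
```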

SatMa34 commented 4 days ago

> The scheduler is copied from Diffusers, so the scheduler itself is probably not the cause. Does 'different' mean the quality or the content (pose, angle, composition, etc.)? If the overall quality is similar, the content may still differ because of the random generator (on CPU vs. GPU), even with the same seed.

I mean the quality is quite different; it seems that your source code doesn't match diffusers.
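One way to narrow this down is to dump the timestep/sigma schedule that diffusers uses for SD3 (FlowMatchEulerDiscreteScheduler) and compare it with the schedule computed inside sd3_minimal_inference.py. The sketch below assumes diffusers >= 0.29; the shift=3.0 value is taken from the SD3-medium scheduler config and is an assumption here, so adjust it if your config differs.

```python
# Diagnostic sketch (not code from either project): print the timestep/sigma
# schedule diffusers uses for SD3 so it can be compared against the schedule
# computed inside sd3_minimal_inference.py.
# Assumes diffusers >= 0.29; shift=3.0 is assumed from the SD3-medium config.
from diffusers import FlowMatchEulerDiscreteScheduler

scheduler = FlowMatchEulerDiscreteScheduler(shift=3.0)
scheduler.set_timesteps(num_inference_steps=28)

# If the two implementations really match, these values should line up with
# the sigmas/timesteps used by the minimal script.
for t, sigma in zip(scheduler.timesteps.tolist(), scheduler.sigmas.tolist()):
    print(f"t={t:8.3f}  sigma={sigma:.6f}")
```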