Open SatMa34 opened 1 week ago
I used sd3_minimal_inference.py to generate an image with a prompt like "a cat holding a sign that says hello world", but it outputs only a cat; it seems the text part of the prompt has been truncated. I tried both sd3_medium_incl_clips_t5xxlfp16.safetensors and sd3_medium.safetensors and got the same result, and it is different from the result from diffusers.
The scheduler is copied from Diffusers, so it should not be the cause. Does 'different' mean the quality or the content (pose, angle, composition, etc.)? If the overall quality is similar, the content may differ because of the random generator (on CPU or GPU), even with the same seed.
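To illustrate the point about the random generator: SD3 denoising starts from Gaussian noise, and the noise tensor is fully determined by the seed *and* the RNG stream (which differs between CPU and CUDA generators in PyTorch). A minimal sketch, with a hypothetical `make_initial_latents` helper (not from either codebase):

```python
import torch

def make_initial_latents(seed, device="cpu", shape=(1, 16, 64, 64)):
    # Hypothetical helper: the generator's device (cpu vs cuda) selects a
    # different RNG stream, so the same seed can still produce different
    # noise, hence different composition with similar overall quality.
    gen = torch.Generator(device=device).manual_seed(seed)
    return torch.randn(shape, generator=gen, device=device)

a = make_initial_latents(42)
b = make_initial_latents(42)
c = make_initial_latents(43)
print(torch.equal(a, b))  # same seed, same device -> identical noise: True
print(torch.equal(a, c))  # different seed -> different noise: False
```

So even with matched seeds, comparing a script that draws noise on CPU against one that draws it on GPU will generally give different images.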
I mean the quality is quite different; it seems your source code doesn't match diffusers.
Running inference on the open-source SD3 model with sd3_minimal_inference.py and with diffusers gives totally different results. Is it because of the scheduler? It seems you use your own scheduler implementation rather than the scheduler from diffusers.