liuliu / swift-diffusion

BSD 3-Clause "New" or "Revised" License

SD2.1 giving bad results on MPS #43

Closed ghost closed 1 year ago

ghost commented 1 year ago

I used sd_v2.1_f16.ckpt. It works fine for 512x512 images, but it really messes the image up at any other size. I suspect something is unstable in the attention, or similar.

Do you know what we could do?

I used `DynamicGraph.flags = [.disableMixedMPSGEMM]`.

ghost commented 1 year ago

Also, in your CKPT file it seems you only use 23 layers in OpenCLIP, whereas many models use 24. Why is that?

liuliu commented 1 year ago

Because the SD v2 implementation always skips the last layer:

https://github.com/Stability-AI/stablediffusion/blob/main/ldm/modules/encoders/modules.py#L201
https://github.com/Stability-AI/stablediffusion/blob/main/configs/stable-diffusion/v2-inference.yaml#L67
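In other words, SD v2 takes the text embedding from the penultimate transformer layer (`layer: "penultimate"` in v2-inference.yaml), so the 24th layer's weights are never exercised and can be omitted from the checkpoint. A toy Python sketch of that idea (not the actual swift-diffusion code; the layer stubs here are stand-ins for illustration):

```python
def encode(tokens, layers, skip_last=1):
    """Run `tokens` through `layers`, stopping `skip_last` layers early,
    mimicking how SD v2 reads the penultimate CLIP hidden state."""
    h = tokens
    for layer in layers[: len(layers) - skip_last]:
        h = layer(h)
    return h

# 24 stand-in "layers"; each just records its index so we can count
# how many actually run.
layers = [lambda h, i=i: h + [i] for i in range(24)]

out = encode([], layers)
print(len(out))  # 23 layers executed; the last one is skipped
```

Since the last layer's output is never consumed, storing only 23 layers in the converted checkpoint loses nothing for SD v2 inference.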

ghost commented 1 year ago

@liuliu Some models are not working properly, e.g.: https://civitai.com/models/3725/djz-jovian-skyship-v21

Even in the Draw Things app it gives bad results (even with FP32 attention). Let me know if you are able to produce anything good with Draw Things using this model.