Also, in your CKPT file it seems like you only use 23 layers of OpenCLIP, whereas many models have 24. Why so?
Because in the SD v2 implementation, they always skip the last layer:
https://github.com/Stability-AI/stablediffusion/blob/main/ldm/modules/encoders/modules.py#L201
https://github.com/Stability-AI/stablediffusion/blob/main/configs/stable-diffusion/v2-inference.yaml#L67
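For reference, here is a rough Python sketch of that penultimate-layer trick, mirroring the linked FrozenOpenCLIPEmbedder with layer="penultimate" (the model name and pretrained tag below are only illustrative): the last of the 24 text transformer blocks is simply never run, which is why the checkpoint only needs 23 layers' worth of text-encoder weights.

```python
# Sketch of SD v2's "penultimate layer" text encoding with open_clip.
# Assumes the ViT-H-14 text encoder (24 transformer blocks), as used by SD v2.
import torch
import open_clip

model, _, _ = open_clip.create_model_and_transforms(
    "ViT-H-14", pretrained="laion2b_s32b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-H-14")

@torch.no_grad()
def encode_penultimate(text: str) -> torch.Tensor:
    tokens = tokenizer([text])
    x = model.token_embedding(tokens) + model.positional_embedding
    x = x.permute(1, 0, 2)                     # NLD -> LND, as in the linked code
    for block in model.transformer.resblocks[:-1]:   # run 23 of 24 blocks, skip the last
        x = block(x, attn_mask=model.attn_mask)
    x = x.permute(1, 0, 2)                     # LND -> NLD
    return model.ln_final(x)                   # conditioning fed to the UNet
```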
@liuliu some models are not working properly, e.g.: https://civitai.com/models/3725/djz-jovian-skyship-v21
Even in the Draw Things app it gives bad results (even with FP32 attention). Let me know if you are able to produce anything good with Draw Things using this model.
I used sd_v2.1_f16.ckpt. It works fine for 512x512 images, but it really messes the image up at any other size. I suspect something is unstable in the attention or elsewhere.
Do you know what we could do?
I used:
DynamicGraph.flags = [.disableMixedMPSGEMM]