Closed jtara1 closed 1 year ago
The model shape was (1, 77, 768) and this was mismatching my CLIP model shape, (1, 77, 512). I used a CLIP model with a projection_dim of 768 to fix it: I changed the code in aesthetic_clip.py to pull and use "openai/clip-vit-large-patch14". The mismatch surfaced at the @ operator. It may have been because I was using an older aesthetic embedding trained with a different CLIP model and shape.
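The class of error described above can be reproduced outside the extension. A minimal sketch (the exact operation inside aesthetic-gradients may differ; the tensor names here are illustrative, only the two shapes come from the report):

```python
import numpy as np

aesthetic = np.zeros((1, 77, 768))  # embedding trained against ViT-L/14 (dim 768)
text_emb = np.zeros((1, 77, 512))   # output of a ViT-B/32 CLIP text model (dim 512)

# Combining the two tensors fails immediately because the trailing
# dimensions (768 vs 512) are incompatible:
try:
    _ = aesthetic + text_emb
except ValueError:
    print("shape mismatch:", aesthetic.shape, "vs", text_emb.shape)
```

This is why the fix is to load a CLIP model whose projection_dim matches the embedding, rather than anything about the training images or the txt2img resolution.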
I was able to train and get my aesthetic gradient embedding, but this bug happens when I try txt2img using the embedding I just created. The training image was 512 x 512, and my txt2img target resolution is 512 x 512.
End of stacktrace:
sd web ui version:
commit 983167e621aa55431f6dc7e0a26f021a66a33cd0
aesthetic-gradients version:
2624e5d (HEAD -> master, origin/master, origin/HEAD) use the new callback for script unloaded to stop the script from having effect after it's unloaded
I've also applied the manual patch to my
extensions/stable-diffusion-webui-aesthetic-gradients/aesthetic_clip.py
changing line 97 to `aesthetic_clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")`
as suggested for the other bug fix in https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients/issues/21
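Since both patches in this thread amount to picking the CLIP checkpoint whose projection_dim matches the embedding, here is a hypothetical helper sketching that choice. Neither the helper name nor the dict exists in the extension; the projection dimensions are the published values for the two OpenAI checkpoints:

```python
# Hypothetical helper, not part of aesthetic_clip.py.
CLIP_PROJECTION_DIMS = {
    "openai/clip-vit-base-patch32": 512,   # matches embeddings shaped (1, 77, 512)
    "openai/clip-vit-large-patch14": 768,  # matches embeddings shaped (1, 77, 768)
}

def checkpoint_for_embedding(embedding_shape):
    """Pick the checkpoint whose projection_dim matches the embedding's last dim."""
    dim = embedding_shape[-1]
    for name, proj_dim in CLIP_PROJECTION_DIMS.items():
        if proj_dim == dim:
            return name
    raise ValueError(f"no known CLIP checkpoint with projection_dim={dim}")

print(checkpoint_for_embedding((1, 77, 768)))  # openai/clip-vit-large-patch14
```

The same logic explains why the issue #21 patch (base-patch32, dim 512) and the fix described above (large-patch14, dim 768) are not interchangeable: the right one depends on which CLIP model the aesthetic embedding was trained with.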