Closed: alexanderbrodko closed this issue 7 months ago.
The Python version gives similar results.
@alexanderbrodko I get similar results. For me it only works well with this model (128x128): https://huggingface.co/NikolayKozloff/stable-diffusion-nano-2-1-ggml/tree/main
Same with the new .gguf model format.
I found a good set of settings for img2img:

- No TAESD (it crashes).
- vae_decode_only = false (otherwise it crashes).
- Input and output image need to have the same size, otherwise you get strange artifacts.
- strength = 0.5.
- More sample steps (30-40).
- EULER_A, DPMPP2S_A, or LCM as the sample method.
- Do not generate images 768x768 or larger, otherwise it crashes.
- A LoRA adapter seems to improve the result.
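The settings above can be combined into a single invocation. A minimal sketch follows, assuming the binary name `./sd` and flag spellings (`--mode`, `-i`, `--strength`, `--steps`, `--sampling-method`) match the command quoted later in this thread; adjust names to your build:

```python
import subprocess

def build_img2img_cmd(model, image, prompt, strength=0.5, steps=35,
                      sampler="euler_a"):
    # Encodes the recommendations above: strength 0.5, 30-40 steps,
    # euler_a / dpm++2s_a / lcm as the sampler. The input image is
    # assumed to already match the output size.
    return [
        "./sd",
        "-m", model,
        "--mode", "img2img",
        "-i", image,
        "-p", prompt,
        "--strength", str(strength),
        "--steps", str(steps),
        "--sampling-method", sampler,
    ]

cmd = build_img2img_cmd("v1-5-pruned-emaonly-f16.gguf", "input.png",
                        "a photo of a cat")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment once the binary is in place
```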
Tested with: v1-5-pruned-emaonly-f16.gguf
The issue should be fixed now. You can pull the latest code and give it another try.
It works fine now. Thanks!
Also, the cat with blue eyes at strength=0.4 differs strongly from the smooth one in README.md.
I will find some time to update the pictures in the readme, many things are different.
> It works fine now. Thanks!

I want to second that. Really great.
txt2img works fine for me, but img2img gives blurry, abstract images.
Original image:
```shell
./sd.exe -m v2.bin -p "old two-storied american mansion entrance porch, bushes, second floor, door and windows nailed up with boards" -t 6 --sampling-method dpm++2mv2 --mode img2img -i Untitled.jpg --strength 0.2 --seed -1
```
With `--strength 0.7`:
I tried several images and different sampling methods, and also tried the negative prompt "blur, blurry"; all of them give results like these. The model is 512-base-ema.ckpt (the v2 base model, which works fine for txt2img).
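One way to narrow down where the blur sets in is to run the same command with several strength values and compare the outputs side by side. A rough sketch, reusing the paths and flags from the command above; the `-o` output flag and file naming are assumptions:

```python
# Build one command per strength value so the results can be compared.
base = [
    "./sd.exe", "-m", "v2.bin",
    "--mode", "img2img", "-i", "Untitled.jpg",
    "-p", "old two-storied american mansion entrance porch",
    "--sampling-method", "dpm++2mv2",
]

commands = []
for strength in (0.2, 0.3, 0.5, 0.7):
    out = f"out_strength_{strength}.png"
    commands.append(base + ["--strength", str(strength), "-o", out])

for cmd in commands:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually run
```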