Jack000 / glid-3-xl-stable

stable diffusion training

Static Noise with sample.py #16

Open · chavinlo opened this issue 1 year ago

chavinlo commented 1 year ago

Hello, I was trying out the outpainting/inpainting method you mentioned in the readme.

After training for about 1000 steps, I tried out sample.py with inpainting.

Input images: dsa_mask, dsa
Output: [two attached images showing static noise]

I have also read in other issues that at least 10k steps are necessary. Is there something wrong I could be doing?

Jack000 commented 1 year ago

if you just want to use the inpainting model, you shouldn't have to do any training. Download the pretrained inpainting model and use the CLI commands in the readme.
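For reference, the invocation would look roughly like this; a sketch assuming the flag names from glid-3-xl's sample.py (`--model_path`, `--edit`, `--mask`, `--text`), so verify against this repo's readme:

```
# rough sketch of an inpaint run -- flag names assumed from glid-3-xl's sample.py,
# check the readme for the exact command
python sample.py --model_path inpaint.pt --edit input.png --mask mask.png --text "a photo of a park"
```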

if you really do want to train a custom inpaint model on your own dataset, I strongly recommend resuming from my checkpoint instead of the base SD model.
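Something like the following, where the script name and `--resume_ckpt` are hypothetical stand-ins for whatever resume option the training script actually documents:

```
# hypothetical resume command -- script name and --resume_ckpt flag are assumptions,
# not the repo's documented CLI; point it at the pretrained inpaint checkpoint
python train.py --resume_ckpt inpaint.pt --data_dir /path/to/dataset
```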

chavinlo commented 1 year ago

> if you just want to use the inpainting model, you shouldn't have to do any training. Download the pretrained inpainting model and use the CLI commands in the readme.
>
> if you really do want to train a custom inpaint model on your own dataset, I strongly recommend resuming from my checkpoint instead of the base SD model.

Thanks for the recommendation. Yeah, it was pretty silly training it from scratch; I was using the Waifu Diffusion model as a base. By the way, how many steps and how many images did you train your inpaint model on? The required resolution is still 512x512, right?

Jack000 commented 1 year ago

I just added some code to load base SD weights into the (otherwise uninitialized) inpaint model, so it should work better now if you want to train from a base SD model.
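For anyone curious why this needs special handling: the inpaint UNet's first conv takes extra input channels (the mask and masked image), so a base SD checkpoint can't be loaded directly. A minimal sketch of the usual trick, copying every shape-matching tensor and zero-initializing the extra input channels; the parameter name `input_blocks.0.0.weight` is an assumption based on the guided-diffusion UNet layout, not necessarily this repo's code:

```python
import torch

def init_inpaint_from_base(inpaint_model, base_ckpt_path):
    """Initialize an inpaint UNet from a base SD checkpoint (sketch, not the repo's code)."""
    base_state = torch.load(base_ckpt_path, map_location="cpu")
    own_state = inpaint_model.state_dict()
    for name, tensor in base_state.items():
        if name not in own_state:
            continue  # skip layers that exist only in the base model
        if own_state[name].shape == tensor.shape:
            own_state[name].copy_(tensor)
        elif name.endswith("input_blocks.0.0.weight"):
            # first conv: the inpaint model has extra input channels (mask + masked image);
            # copy the base model's channels and leave the new ones zero-initialized
            own_state[name].zero_()
            own_state[name][:, : tensor.shape[1]].copy_(tensor)
    inpaint_model.load_state_dict(own_state)
```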

I trained for 100k steps at batch size 256 on the LAION aesthetic dataset, which took about a week on 8xA100s. I trained at 512 resolution, but it's an adjustable parameter.
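(For scale: 100,000 steps × 256 per batch = 25.6 million image-caption samples seen over the fine-tune.)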