lllyasviel / ControlNet-v1-1-nightly

Nightly release of ControlNet 1.1

Training Details for tile model #125

Open · andriyrizhiy opened this issue 8 months ago

andriyrizhiy commented 8 months ago

Hi! Awesome work! I have been trying to train the tile version for Stable Diffusion v2. Can you share some details (dataset, how many steps, etc.)? I use a LAION dataset and make pairs of:

  1. random 512x512 crops of the original images
  2. the same crops resized down to 128x128 and back up to 512x512 to introduce resizing artifacts

After that, I try to train the ControlNet to generate the first image of each pair, using the second image as the conditioning image (roughly as sketched below). I am using train_batch_size=1 with gradient_accumulation_steps=4 and learning_rate=1e-5. After 100k iterations the results are bad, and it feels like nothing has been learned. Can you share your experience with training your tile model? In the available materials I only found training details for the other models (not tile). I would be grateful for any information.
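For illustration, here is a minimal sketch of pair creation along those lines; Pillow, bicubic resampling, and the example file names are my assumptions, not the exact pipeline:

```python
# Rough sketch of the pair creation described above.
import random
from PIL import Image

def make_pair(path, crop=512, low=128):
    img = Image.open(path).convert("RGB")  # assumes the source is at least 512x512
    # 1. random 512x512 crop -> training target
    x = random.randint(0, img.width - crop)
    y = random.randint(0, img.height - crop)
    target = img.crop((x, y, x + crop, y + crop))
    # 2. downscale to 128x128 and back to 512x512 -> conditioning image with resizing artifacts
    cond = target.resize((low, low), Image.BICUBIC).resize((crop, crop), Image.BICUBIC)
    return target, cond

if __name__ == "__main__":
    target, cond = make_pair("example.jpg")
    target.save("target.png")
    cond.save("condition.png")
```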
geroldmeisinger commented 8 months ago

The only description I could find was on the ControlNet 1.1 front page and in the comments linked from there. Note that the original tile conditioning was resized to 64x64, as opposed to your 128x128. I recently started ControlNet training myself and documented everything in my article on Civitai. Right now I'm training an alternative edge-detection model, where I document every experiment. I think this could be useful for you. You might also want to take a look at the SD 2 ControlNet models from Thibaud.

Some things I learned:

Is the 512x512 crop size correct here, given that SD2 uses 768x768 images? (I don't know.)

> After that, I try to train the ControlNet to generate the first image of each pair, using the second image as the conditioning image.

What does this mean? Could you please post some examples? The way you wrote it sounds as if you want to confuse your ControlNet on purpose by showing it the wrong image :D

> After 100k iterations the results are bad, and it feels like nothing has been learned.

With an effective batch size of 4 you should already see some effect after 25k images. Please provide more details.

> In the available materials I only found training details for the other models

It should be pretty much the same.

Please share your experiences!

andriyrizhiy commented 8 months ago

Thanks for sharing your experience! I tried training with a larger effective batch size, and it looks better!

> Is the 512x512 crop size correct here, given that SD2 uses 768x768 images?

In my opinion it doesn't matter, because if everything is fine with the training script, then it should work at 512 as well.

> What does this mean? Could you please post some examples? The way you wrote it sounds as if you want to confuse your ControlNet on purpose by showing it the wrong image :D

While training, I use the image with resizing artifacts as the conditioning image, to help SD generate an image similar to the artifact image but at a higher resolution.
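For concreteness, here is a rough sketch of a single training step in that setup, following the general shape of diffusers' ControlNet training example; the base model name, the empty prompt, and the random tensors standing in for a real dataloader are all assumptions:

```python
# Rough sketch: the clean crop is the denoising target, the degraded crop is the
# ControlNet conditioning. The SD2 base model name and the toy tensors are assumptions.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, ControlNetModel, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

base = "stabilityai/stable-diffusion-2-1-base"  # 512x512 SD2 base (assumed)
tokenizer = CLIPTokenizer.from_pretrained(base, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(base, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(base, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet")
noise_scheduler = DDPMScheduler.from_pretrained(base, subfolder="scheduler")
controlnet = ControlNetModel.from_unet(unet)  # initialize the ControlNet from the UNet

for m in (vae, text_encoder, unet):
    m.requires_grad_(False)  # only the ControlNet is trained

clean = torch.randn(1, 3, 512, 512)  # target crop, scaled to [-1, 1]
cond = torch.rand(1, 3, 512, 512)    # degraded 128->512 crop, in [0, 1]

ids = tokenizer([""], padding="max_length", truncation=True,
                max_length=tokenizer.model_max_length, return_tensors="pt").input_ids
encoder_hidden_states = text_encoder(ids)[0]

with torch.no_grad():
    latents = vae.encode(clean).latent_dist.sample() * vae.config.scaling_factor
noise = torch.randn_like(latents)
timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],))
noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

down_res, mid_res = controlnet(
    noisy_latents, timesteps,
    encoder_hidden_states=encoder_hidden_states,
    controlnet_cond=cond,
    return_dict=False,
)
noise_pred = unet(
    noisy_latents, timesteps,
    encoder_hidden_states=encoder_hidden_states,
    down_block_additional_residuals=down_res,
    mid_block_additional_residual=mid_res,
).sample
loss = F.mse_loss(noise_pred.float(), noise.float())  # epsilon-prediction objective
loss.backward()
```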

zjysteven commented 2 months ago

I know it's been quite a while, but I want to share my experience in case it helps. I'm using diffusers' official example to train a ControlNet tile model, https://github.com/huggingface/diffusers/tree/main/examples/controlnet, and I'm using all the memory-saving techniques (e.g., gradient checkpointing, xformers memory-efficient attention, 8-bit Adam, and fp16 mixed precision, which are all available options in that training script) to achieve an effective batch size of 256. I did observe the "sudden convergence" around 3k steps, and essentially it worked.


I uploaded my (workable) training script here, https://github.com/zjysteven/controlnet_tile, in case anyone is interested. (test screenshot attached)
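As a rough sketch (not the actual script), the memory-saving options mentioned above map to roughly the following calls; the base model name, learning rate, and the batch/accumulation split are illustrative assumptions:

```python
# Rough illustration of the memory-saving options; requires xformers and bitsandbytes.
import bitsandbytes as bnb
from accelerate import Accelerator
from diffusers import ControlNetModel, UNet2DConditionModel

base = "runwayml/stable-diffusion-v1-5"  # assumed base model
unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet")
controlnet = ControlNetModel.from_unet(unet)

accelerator = Accelerator(
    mixed_precision="fp16",           # fp16 mixed precision
    gradient_accumulation_steps=32,   # e.g. per-GPU batch 8 x 32 steps = effective 256
)
controlnet.enable_gradient_checkpointing()               # gradient checkpointing
unet.enable_xformers_memory_efficient_attention()        # xformers memory-efficient attention
controlnet.enable_xformers_memory_efficient_attention()
optimizer = bnb.optim.AdamW8bit(controlnet.parameters(), lr=1e-5)  # 8-bit Adam
controlnet, optimizer = accelerator.prepare(controlnet, optimizer)
```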