CompVis / taming-transformers

Taming Transformers for High-Resolution Image Synthesis
https://arxiv.org/abs/2012.09841
MIT License

debugging custom models #107

Open dribnet opened 3 years ago

dribnet commented 3 years ago

TL;DR: custom training is great! Is there a good config or way to debug the quality of results on small-ish datasets?


I've managed to train my own custom models using the excellent additions provided by @rom1504 in #54 and have hooked this up to CLIP + VQGAN backpropagation successfully. However, so far the samples from my models are a bit glitchy. For example, with a custom dataset of images such as the following:

[image: example]

I'm only able to get a sample that looks something like this:

[image: painting_16_06]

Or similarly when I train on a dataset of sketches and images like these:

[image: Sketch (40)]

My CLIP + VQGAN backpropagation of "spider" with that model turns out like this:

[image: sunset_ink1_15_01]

So there is evidence that the model is picking up some gross information such as color distributions, but the results are far from what I would expect from a simpler model such as StyleGAN on the same dataset.

So my questions, one of which is quoted in the replies below:

- Is there an easy change to instead more lightly fine tune an existing model on my dataset?

mrapplexz commented 2 years ago

> Is there an easy change to instead more lightly fine tune an existing model on my dataset?

I've managed to fine-tune an existing model with these steps (a consolidated shell sketch follows the list):

  1. Download the existing weights and config (e.g. https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/)
  2. Create the directories `<taming-transformers repo root>/logs/<some name>/configs` and `<taming-transformers repo root>/logs/<some name>/checkpoints`
  3. Put the downloaded `last.ckpt` file into the newly created `checkpoints` directory
  4. Rename the downloaded `model.yaml` file to `<some name>-project.yaml` and put it into the `configs` directory
  5. Add these lines to the end of the `<some name>-project.yaml` file. Don't forget to adapt values (paths, batch size, etc.) as you would when training a model from scratch:
     data:
       target: main.DataModuleFromConfig
       params:
         batch_size: 5
         num_workers: 8
         train:
           target: taming.data.custom.CustomTrain
           params:
             training_images_list_file: some/training.txt
             size: 256
         validation:
           target: taming.data.custom.CustomTest
           params:
             test_images_list_file: some/test.txt
             size: 256
  6. Run `python -m pytorch_lightning.utilities.upgrade_checkpoint --file logs/<some name>/checkpoints/last.ckpt`
  7. Run `python main.py -t True --gpus <gpus> --resume logs/<some name>` and the training process should start :)
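
For convenience, the same recipe as a shell sketch. `my_model`, the download location, and the GPU spec are all placeholders; adjust them to your setup:

```sh
# Run from the taming-transformers repo root; "my_model" is a placeholder name.
mkdir -p logs/my_model/configs logs/my_model/checkpoints

# Steps 3-4: place the downloaded checkpoint and the renamed config.
cp ~/Downloads/last.ckpt logs/my_model/checkpoints/last.ckpt
cp ~/Downloads/model.yaml logs/my_model/configs/my_model-project.yaml

# Step 5: append the data section above to logs/my_model/configs/my_model-project.yaml.
# some/training.txt and some/test.txt are plain text files with one image path per line.

# Steps 6-7: upgrade the checkpoint, then resume training on the first GPU.
python -m pytorch_lightning.utilities.upgrade_checkpoint --file logs/my_model/checkpoints/last.ckpt
python main.py -t True --gpus 0, --resume logs/my_model
```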
dribnet commented 2 years ago

Thanks heaps @mrapplexz - this is indeed working well for me. So far I'm surprised by how powerful even 100 iterations of fine-tuning is (I'll probably tweak the learning rate down, etc.), but this recipe was hugely helpful in getting me unblocked!
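
For anyone else who wants to turn the learning rate down: in the published configs the knob appears to be `model.base_learning_rate` at the top of the project yaml. A minimal sketch, where the lowered value is just an illustration, not a recommendation:

```yaml
model:
  base_learning_rate: 1.0e-6  # published configs ship 4.5e-06; lower it here for gentler fine-tuning
  # note: main.py multiplies this by ngpus * batch_size * accumulate_grad_batches
  # to get the actual optimizer learning rate
```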

Awj2021 commented 4 months ago

@mrapplexz @dribnet Hi, thank you for your amazing ideas, but some points confuse me. When resuming the model, how do you set the number of training steps? For example, I have 1M images.
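
Not one of the authors, but since the stock `main.py` forwards a `lightning.trainer` block from the project yaml to `pytorch_lightning.Trainer`, one way should be to cap the run by optimizer steps (or epochs) there. The numbers below are placeholders, not recommendations:

```yaml
lightning:
  trainer:
    max_steps: 200000  # placeholder: stop after this many optimizer steps
    # or cap by full passes over the 1M images instead:
    # max_epochs: 2
```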

Awj2021 commented 4 months ago

And I have another question, as discussed in issues/93: when fine-tuning on a different dataset (e.g., a medical image dataset), the parameter `disc_start = 0` used in https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/ may not be a good choice. But I am still training the model, so this is just an assumption for now.
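
For reference, `disc_start` lives in the loss config of the model section of the project yaml. A fragment sketch (other model and loss params omitted; the 10000 is a placeholder, not a recommendation):

```yaml
model:
  params:
    lossconfig:
      target: taming.modules.losses.vqperceptual.VQLPIPSWithDiscriminator
      params:
        disc_start: 10000  # placeholder: global step at which the discriminator
                           # loss kicks in; 0 enables it from the first step
```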

matthew-wave commented 4 months ago

> I've managed to fine-tune an existing model with these steps: […]

Hello, thank you very much for your answer, it has been very helpful to me. After I ran `python -m pytorch_lightning.utilities.upgrade_checkpoint --file logs/must_finish/vq_f8_16384/checkpoints/last.ckpt`, `CUDA error: out of memory` is displayed, which confuses me. I am using the .ckpt file you linked to.
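
Not sure this is the cause, but one guess: if the checkpoint was saved with CUDA tensors, loading it can try to allocate GPU memory. A minimal Python sketch (the path is copied from the command above) that remaps everything to CPU before re-running the upgrade script:

```python
import torch

# Path copied from the command above.
ckpt_path = "logs/must_finish/vq_f8_16384/checkpoints/last.ckpt"

# map_location="cpu" loads every tensor onto CPU, so no GPU memory is touched.
ckpt = torch.load(ckpt_path, map_location="cpu")
torch.save(ckpt, ckpt_path)  # overwrite with the CPU-mapped copy
```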