Hi, thanks for open-sourcing this awesome work! I would like to train the model on my own dataset. So far, I have preprocessed all images to 256x256 using scripts/dataset_tool.py. Here are the issues I ran into when trying to train on my own images:
How do I generate the image list? I used the following command to generate one, but I'm not sure if it is correct; I actually didn't see the datasets/ffhq/ffhq_256.txt file when training on the FFHQ dataset.
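For context, here is a minimal sketch of how such a list file could be generated, assuming it is simply one image path per line (I'm not certain this is the exact format the training code expects, and `write_image_list` is a hypothetical helper, not part of the repo):

```python
# Hypothetical helper: write one image path per line into a list file,
# mirroring what datasets/ffhq/ffhq_256.txt presumably contains.
from pathlib import Path

def write_image_list(image_dir, list_path, exts=(".png", ".jpg", ".jpeg")):
    # Collect image files recursively, sorted for reproducibility.
    paths = sorted(p for p in Path(image_dir).rglob("*")
                   if p.suffix.lower() in exts)
    with open(list_path, "w") as f:
        for p in paths:
            f.write(f"{p}\n")
    return len(paths)
```

Something like `write_image_list("datasets/my_data_256", "datasets/my_data_256.txt")` would then produce the list, if that format assumption holds.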
But I'm not sure how to train on 32x32 images (I'd like a quick tryout), or how to change the batch_size, etc. I looked into the tl2 library but couldn't find any documentation.
Thanks for your time, and any help would be appreciated!