Hi,
Can you please tell me if there is a pre-trained model available for soft_intro_vae, or do we have to train it ourselves? I want to try it on the CelebA 256x256 dataset.
Thank you
Thank you. Are pretrained models only available for the style-based architecture and not the standard one?
We didn't perform many experiments with the standard architecture at high resolutions, as the style-based one produced much better results, but you can try the recommended hyper-parameters we provided; they seemed to produce reasonably good results.
Thank you! I just wanted to clarify the steps needed to run the pretrained model. I have downloaded the pre-generated dataset for CelebA from the link that's provided. Is there any preprocessing that needs to be done on the data, or can I just change the directories in the config file and run metrics/fid_score.py to get the results from the pretrained model?
Just place the checkpoint in the designated directory, change the path in the config file, and you should be good to go.
So I only need to run metrics/fid_score.py? Where should I specify where the CelebA dataset is stored?
See an example here of how to calculate the FID using the model: https://github.com/taldatech/soft-intro-vae-pytorch/blob/main/style_soft_intro_vae/train_style_soft_intro_vae.py#L290
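Roughly, the flow there is: load the checkpoint, dump a batch of generated samples to a folder, and compute FID between that folder and a folder of real images. A minimal sketch (the model-loading line, model.sample(), the latent size, and the calculate_fid_given_paths signature are all assumptions here, modeled on pytorch-fid; the linked script has the exact calls):

```python
import os
import torch
from torchvision.utils import save_image

from metrics.fid_score import calculate_fid_given_paths  # assumed pytorch-fid-style helper

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model loading -- the real checkpoint-loading code depends on the
# repo's checkpoint format; see the linked training script for the exact calls.
model = torch.load("/path/to/checkpoint.pth", map_location=device)
model.eval()

real_dir = "/path/to/real/celeba_hq_256"   # folder of real images
gen_dir = "./fid_generated"                # folder we fill with samples
os.makedirs(gen_dir, exist_ok=True)

# Dump generated samples to disk (latent size 512 is illustrative, and
# model.sample() is an assumed API -- adapt to the actual model interface).
with torch.no_grad():
    for i in range(5000):
        z = torch.randn(1, 512, device=device)
        fake = model.sample(z)
        save_image(fake, os.path.join(gen_dir, f"{i:05d}.png"), normalize=True)

# Compare the two folders; this signature follows older pytorch-fid versions.
fid = calculate_fid_given_paths((real_dir, gen_dir), batch_size=50,
                                cuda=torch.cuda.is_available(), dims=2048)
print(f"FID: {fid:.2f}")
```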
Thanks, Daniel, for being so responsive; I appreciate it. I'm having a problem with the dataset directories. I downloaded the CelebA256 dataset from the link you provided, and in fid_score.py I tried changing path (line 517) to the directory where my dataset is stored. I also tried changing real_img_save_path (line 581) to where the dataset is stored and created an empty directory for gen_img_save_path (line 580). However, none of these work: the length of my files directory is still 0, so I keep getting the warning "batch size is bigger than the data size. Setting batch size to data size", where the data size is zero. Can you please tell me the correct way to arrange these directories?
For the code files, the correct structure of the directories should be the same as https://github.com/taldatech/soft-intro-vae-pytorch/tree/main/style_soft_intro_vae (just `git clone` it).
For the data, you can place it anywhere; just make sure you modify the correct config file (and that you pass that config file when you run the code). In the config file, e.g., https://github.com/taldatech/soft-intro-vae-pytorch/blob/main/style_soft_intro_vae/configs/celeba-hq256.yaml :
The important fields for the data loader are:

```yaml
PATH: /mnt/data/tal/celebhq_256_tfrecords/celeba-r%02d.tfrecords.%03d
PATH_TEST: /mnt/data/tal/celebhq_256_test_tfrecords/celeba-r%02d.tfrecords.%03d
```
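One note on those values: the %02d/%03d parts are printf-style placeholders that the loader expands into concrete shard filenames. Assuming, as in ALAE, that the first field is the resolution level and the second the shard index, the expansion looks like this:

```python
# Illustrative expansion of the sharded-path pattern from the config above.
path = "/mnt/data/tal/celebhq_256_tfrecords/celeba-r%02d.tfrecords.%03d"
print(path % (8, 0))  # -> .../celeba-r08.tfrecords.000
```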
You need to make sure your files are in TFRecords format; please follow the ALAE instructions here: https://github.com/podgorskiy/ALAE#datasets
You need to download the images and then use dataset_preparation/prepare_celeba_hq_tfrec.py to create the TFRecords files.
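If it helps, this is roughly what writing images into TFRecords looks like in general. It is not the repo's actual script, just a sketch of the format: the feature keys ('shape', 'data') are illustrative, and prepare_celeba_hq_tfrec.py defines the real schema and sharding:

```python
import numpy as np
import tensorflow as tf
from PIL import Image

def write_tfrecord(image_paths, out_path):
    # Serialize each image as one tf.train.Example with raw bytes + shape.
    with tf.io.TFRecordWriter(out_path) as writer:
        for p in image_paths:
            img = np.asarray(Image.open(p).convert("RGB"))  # HWC uint8
            example = tf.train.Example(features=tf.train.Features(feature={
                "shape": tf.train.Feature(
                    int64_list=tf.train.Int64List(value=img.shape)),
                "data": tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[img.tobytes()])),
            }))
            writer.write(example.SerializeToString())

write_tfrecord(["img_000.png", "img_001.png"], "celeba-r08.tfrecords.000")
```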
I hope you can get it to work.
I'm closing the issue, feel free to re-open it if you need further assistance.