Thank you for the extensive experiments and the reliable implementations, which are hard to come by these days!
I have a few questions about the CelebA-HQ 128x128 dataset preprocessing mentioned in "A Large-Scale Study on Regularization and Normalization in GANs" (Kurach et al., ICML 2019).
In Section 2.6 of the paper, the authors mention that the images were preprocessed by running the 128x128x3 version of the code provided in the PGGAN repository.
Could you give some detail on how exactly the "128x128x3 version" was implemented?
Two possibilities come to mind:
(a) replace every 1024 in the code with 128, or
(b) resize the preprocessed 1024x1024x3 images (the original CelebA-HQ images) to 128x128x3.
My questions are:
If (a), could you provide example code for reference?
If (b), could you tell us which resize method you used (e.g., BILINEAR, ANTIALIAS)? The first sketch after these questions shows what I mean.
How did you split the images into training (27,000) and test (3,000) sets? For example, did you sort by index and use the first 27,000 images for training and the last 3,000 for testing, as in the second sketch below?
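For concreteness, here is a minimal sketch of what I mean by option (b), using Pillow; the file name is hypothetical, and I am not claiming either filter is the one your pipeline used:

```python
from PIL import Image

# Hypothetical sketch of option (b): downsample a preprocessed
# 1024x1024x3 CelebA-HQ image to 128x128x3. The file name and the
# choice of resampling filter are assumptions, not the confirmed method.
img = Image.open("celeba_hq_000001.png")  # hypothetical file name

# Two plausible filters; my question is which one (if either) was used.
img_bilinear = img.resize((128, 128), Image.BILINEAR)
img_lanczos = img.resize((128, 128), Image.LANCZOS)  # formerly Image.ANTIALIAS

img_lanczos.save("celeba_hq_000001_128.png")
```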
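And here is a sketch of the index-based split I am asking about in the last question; the directory name is hypothetical:

```python
import os

# Hypothetical sketch of the split scheme described above: sort the
# 30,000 CelebA-HQ file names by index, take the first 27,000 as the
# training set and the last 3,000 as the test set.
filenames = sorted(os.listdir("celeba_hq_128"))  # hypothetical directory
assert len(filenames) == 30_000

train_files = filenames[:27_000]  # first 27,000 images
test_files = filenames[27_000:]   # last 3,000 images
```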
Thanks again for your invaluable contribution!