Devnary opened this issue 3 years ago
Hi,
I think the problem is caused by your batch_size setting: the batch_size should be divisible by (or smaller than) the group_size defined in MinibatchSTDDEV, and the default group_size here is 4.
import tensorflow as tf

class MinibatchSTDDEV(tf.keras.layers.Layer):
    """
    Reference from the official PGGAN implementation:
    https://github.com/tkarras/progressive_growing_of_gans/blob/master/networks.py

    Arguments:
        group_size: an integer; the minibatch must be divisible by (or smaller than) group_size.
    """
    def __init__(self, group_size=4):
        super(MinibatchSTDDEV, self).__init__()
        self.group_size = group_size

    def call(self, inputs):
        group_size = tf.minimum(self.group_size, tf.shape(inputs)[0])  # Minibatch must be divisible by (or smaller than) group_size.
        s = inputs.shape                                                # [NHWC]  Input shape.
        y = tf.reshape(inputs, [group_size, -1, s[1], s[2], s[3]])     # [GMHWC] Split minibatch into M groups of size G.
        y = tf.cast(y, tf.float32)                                      # [GMHWC] Cast to FP32.
        y -= tf.reduce_mean(y, axis=0, keepdims=True)                   # [GMHWC] Subtract mean over group.
        y = tf.reduce_mean(tf.square(y), axis=0)                        # [MHWC]  Variance over group.
        y = tf.sqrt(y + 1e-8)                                           # [MHWC]  Stddev over group.
        y = tf.reduce_mean(y, axis=[1, 2, 3], keepdims=True)            # [M111]  Average over feature maps and pixels.
        y = tf.cast(y, inputs.dtype)                                    # [M111]  Cast back to the input dtype.
        y = tf.tile(y, [group_size, s[1], s[2], 1])                     # [NHW1]  Replicate over group and pixels.
        return tf.concat([inputs, y], axis=-1)                          # [NHWC]  Append as an extra feature map.
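As a quick sanity check of that constraint (the 4x4x512 input shape below is just an illustrative placeholder, not taken from the repo), a batch of 16 with the default group_size of 4 passes through the layer and gains one extra stddev feature map:

x = tf.random.normal([16, 4, 4, 512])    # [NHWC] placeholder batch: 16 is divisible by group_size=4
y = MinibatchSTDDEV(group_size=4)(x)
print(y.shape)                           # (16, 4, 4, 513) -- one stddev feature map appended
# With a batch of, say, 6 and group_size=4, the reshape inside call() would fail,
# which is the kind of error the divisibility rule is about.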
By the way, only 878 images may not be enough to train this model, especially once it grows the resolution. You could use Google Colab to train it. Colab does limit GPU usage, but you can save the model regularly and remember to download it to your PC (otherwise it will be lost when you disconnect from Colab).
I have a sample notebook on Colab that you may try: https://colab.research.google.com/drive/1SdfNdom68koJLdhl3wumjOOvPgfdBJV9?usp=sharing#scrollTo=LOsY-eRGT2wt
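If it helps, here is a minimal sketch of the "save regularly" idea; the placeholder models, the ./checkpoints directory, and the 1000-step interval are all assumptions, not the notebook's actual code:

import tensorflow as tf

generator = tf.keras.Sequential([tf.keras.layers.Dense(8)])       # placeholder models standing in
discriminator = tf.keras.Sequential([tf.keras.layers.Dense(1)])   # for the real PGGAN networks

ckpt = tf.train.Checkpoint(generator=generator, discriminator=discriminator)
manager = tf.train.CheckpointManager(ckpt, directory="./checkpoints", max_to_keep=3)

for step in range(1, 5001):                      # stand-in for the real training loop
    # ... run one training step here ...
    if step % 1000 == 0:
        manager.save(checkpoint_number=step)     # keeps only the 3 most recent checkpoints
# After training (or before the session times out), download ./checkpoints to your PC.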
Hi, well I didn't change anything in the code.
The batch_size is 16 and the group_size is 4.
I noticed that the number of images changes something.
And thanks, the Colab notebook works 👍
I ran the code normally with 878 images from the CelebAMask-HQ dataset; 30k images take far too long on my CPU, and I don't have a GPU.