henry32144 / pggan-tensorflow

A TF 2.0 implementation of Progressive growing of GANs
MIT License

Error when running the program #3

Open · Devnary opened this issue 3 years ago

Devnary commented 3 years ago

I ran the code normally with 878 images of the CelebAMask-HQ dataset; the full 30k images would take too long on my CPU, as I don't have a GPU.

......Traceback (most recent call last):
  File "PGGAN-Tensorflow.py", line 1253, in <module>
    WGAN_GP_train_d_step(generator, discriminator, image, alpha_tensor,
ValueError: in user code:

    PGGAN-Tensorflow.py:1142 WGAN_GP_train_d_step  *
        fake_mixed_pred = discriminator([fake_image_mixed, alpha], training=True)
    PGGAN-Tensorflow.py:292 call  *
        y = tf.reshape(inputs, [group_size, -1, s[1], s[2], s[3]])   # [GMHWC] Split minibatch into M groups of size G.
ValueError: Dimension size must be evenly divisible by 32768 but is 114688 for '{{node model_1/minibatch_stddev/Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](model_1/conv2d_up_channel/compute_weights/conv2d_3/LeakyRelu, model_1/minibatch_stddev/Reshape/shape)' with input shapes: [14,4,4,512], [5] and with input tensors computed as partial shapes: input[1] = [4,?,4,4,512].


henry32144 commented 3 years ago

Hi,

I think the problem is your batch_size setting: the batch size should be divisible by (or smaller than) the group_size defined in MinibatchSTDDEV, and the default group_size here is 4.
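
As a quick sanity check against the numbers in the traceback (a hypothetical back-of-the-envelope, not code from the repo): the reshape into [group_size, -1, H, W, C] only succeeds when the total element count is divisible by group_size * H * W * C.

batch, h, w, c = 14, 4, 4, 512       # the [14, 4, 4, 512] input to the reshape
group_size = 4                       # default group_size in MinibatchSTDDEV

total = batch * h * w * c            # 114688, the "but is 114688" in the error
divisor = group_size * h * w * c     # 32768, the "divisible by 32768" in the error
print(total / divisor)               # 3.5 -> the batch of 14 is not divisible by 4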

class MinibatchSTDDEV(tf.keras.layers.Layer):
    """
    Reference from the official PGGAN implementation
    https://github.com/tkarras/progressive_growing_of_gans/blob/master/networks.py

    Arguments:
      group_size: an integer; the minibatch must be divisible by (or smaller than) group_size.
    """
    def __init__(self, group_size=4):
        super(MinibatchSTDDEV, self).__init__()
        self.group_size = group_size

    def call(self, inputs):
        group_size = tf.minimum(self.group_size, tf.shape(inputs)[0])   # Minibatch must be divisible by (or smaller than) group_size.
        s = inputs.shape                                                 # [NHWC]  Input shape.
        # (Remainder of call, following the official implementation adapted to NHWC.)
        y = tf.reshape(inputs, [group_size, -1, s[1], s[2], s[3]])       # [GMHWC] Split minibatch into M groups of size G.
        y = tf.cast(y, tf.float32)                                       # [GMHWC] Cast to FP32.
        y -= tf.reduce_mean(y, axis=0, keepdims=True)                    # [GMHWC] Subtract mean over group.
        y = tf.reduce_mean(tf.square(y), axis=0)                         # [MHWC]  Variance over group.
        y = tf.sqrt(y + 1e-8)                                            # [MHWC]  Stddev over group.
        y = tf.reduce_mean(y, axis=[1, 2, 3], keepdims=True)             # [M111]  Average over fmaps and pixels.
        y = tf.cast(y, inputs.dtype)                                     # [M111]  Cast back to original dtype.
        y = tf.tile(y, [group_size, s[1], s[2], 1])                      # [NHW1]  Replicate over group and pixels.
        return tf.concat([inputs, y], axis=-1)                           # [NHWC+1] Append as a new feature map.
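
One way this happens in practice (a minimal sketch, assuming the images are batched with tf.data; this is not the repo's exact input pipeline): 878 images with batch_size 16 leaves a final batch of 878 % 16 = 14 images, and passing drop_remainder=True to batch() avoids that partial batch.

import tensorflow as tf

# Hypothetical pipeline: 878 dummy images, batched by 16.
images = tf.data.Dataset.from_tensor_slices(tf.zeros([878, 4, 4, 3]))

# Without drop_remainder the last batch has 878 % 16 = 14 samples, which is not
# divisible by group_size = 4 and would trigger the Reshape error above.
dataset = images.batch(16, drop_remainder=True)

for batch in dataset:
    assert batch.shape[0] % 4 == 0       # every remaining batch is a full 16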

By the way, only 878 images may not be enough to train this model, especially once it grows to higher resolutions. You could use Google Colab to train it; Colab limits GPU usage, but you can save the model regularly and remember to download it to your PC (otherwise it will be lost when you disconnect from Colab).
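
For the regular saving, something like tf.train.CheckpointManager works (a minimal sketch with hypothetical stand-in models, not the notebook's exact code):

import tensorflow as tf

# Hypothetical stand-ins for the generator/discriminator built earlier in the script.
generator = tf.keras.Sequential([tf.keras.layers.Dense(8)])
discriminator = tf.keras.Sequential([tf.keras.layers.Dense(1)])

checkpoint = tf.train.Checkpoint(generator=generator, discriminator=discriminator)
manager = tf.train.CheckpointManager(checkpoint, directory='./ckpt', max_to_keep=3)

for epoch in range(10):
    # ... run the training steps for this epoch ...
    if epoch % 5 == 0:
        manager.save()                   # then download ./ckpt before the session ends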

I have a sample notebook on Colab; you can try it: https://colab.research.google.com/drive/1SdfNdom68koJLdhl3wumjOOvPgfdBJV9?usp=sharing#scrollTo=LOsY-eRGT2wt

Devnary commented 3 years ago

Hi, I didn't change anything in the code. The batch_size is 16 and the group_size is 4. I noticed that the number of images changes things, though: with 878 images and batch_size 16, the last batch has 878 % 16 = 14 images, which is not divisible by group_size 4 and matches the [14, 4, 4, 512] shape in the traceback.

And thanks, the Colab works 👍