jiamings / wgan

Tensorflow Implementation of Wasserstein GAN (and Improved version in wgan_v2)
240 stars 82 forks

Thank you for posting this. I have some Tensorflow language candy to share. #3

Closed peter6888 closed 7 years ago

peter6888 commented 7 years ago

Thank you for sharing your code. It actually helped me a lot!

Instead of `ddx = tf.sqrt(tf.reduce_sum(tf.square(ddx), axis=1))` you can use `ddx = tf.norm(ddx, axis=1)`; I tried this and it gives the same result. For the discriminator (mlp version), I use `tf.layers`, which saves a lot of lines of code. As below:

def discriminator(x):
    # reuse=tf.AUTO_REUSE lets the same variables serve the real and fake batches
    with tf.variable_scope('discriminator', reuse=tf.AUTO_REUSE):
        # Reshape flat MNIST vectors into 28x28 single-channel images
        nn_x  = tf.reshape(x, [tf.shape(x)[0], 28, 28, 1])
        # leaky_relu: tf.nn.leaky_relu (TF >= 1.4) or an equivalent helper
        conv1 = tf.layers.conv2d(nn_x, filters=64, kernel_size=4, strides=2, activation=leaky_relu)
        conv2 = tf.layers.conv2d(conv1, filters=128, kernel_size=4, strides=2, activation=leaky_relu)
        bn    = tf.layers.batch_normalization(conv2, training=True)
        flt   = tf.contrib.layers.flatten(bn)
        dense = tf.layers.dense(flt, 1024, activation=leaky_relu)
        # Unbounded scalar critic output -- no sigmoid for WGAN
        logits = tf.layers.dense(dense, 1)
        return logits
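The `tf.norm` equivalence can be sanity-checked outside TensorFlow, since both expressions compute the row-wise Euclidean (L2) norm. A minimal NumPy sketch (the array `g` is just made-up sample data standing in for the gradient tensor `ddx`):

```python
import numpy as np

# Fake batch of "gradients": 8 samples, 16 dimensions each
rng = np.random.default_rng(0)
g = rng.standard_normal((8, 16))

# Hand-rolled per-row L2 norm, mirroring tf.sqrt(tf.reduce_sum(tf.square(ddx), axis=1))
manual = np.sqrt(np.sum(np.square(g), axis=1))

# Library norm, mirroring tf.norm(ddx, axis=1) (the default is the Euclidean norm)
library = np.linalg.norm(g, axis=1)

print(np.allclose(manual, library))  # True
```

The same identity holds for the TensorFlow ops, which is why the two lines are interchangeable in the gradient-penalty computation.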
jiamings commented 7 years ago

Thanks! It would be great if you could make a PR for this issue - I will merge it 👍