deepmagd / shenaniGAN

Work in progress
MIT License

stage 2 model implementation #34

Closed Devin-Taylor closed 4 years ago

Devin-Taylor commented 4 years ago

@alecokas keep branch open please

Devin-Taylor commented 4 years ago

Taking the [batch, 1, 1, dim] embedding tensor and tiling it with multiples [1, 16, 16, 1], so the embedding is replicated across every position of the 16x16 spatial grid.
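A minimal NumPy sketch of the shape bookkeeping (the `tf.expand_dims` / `tf.tile` calls behave equivalently; batch size and embedding dim here are made up for illustration):

```python
import numpy as np

# Hypothetical reduced embedding: batch of 2 vectors of dimension 128
batch, dim = 2, 128
reduced_embedding = np.random.rand(batch, dim).astype(np.float32)

# Equivalent of tf.expand_dims twice: [batch, dim] -> [batch, 1, 1, dim]
spatial = reduced_embedding[:, np.newaxis, np.newaxis, :]

# Equivalent of tf.tile(..., [1, 16, 16, 1]): copy the embedding to
# every cell of the 16x16 spatial grid
tiled = np.tile(spatial, (1, 16, 16, 1))

print(tiled.shape)  # (2, 16, 16, 128)
```

Every spatial position ends up holding an identical copy of that sample's embedding, ready to be concatenated with the 16x16 feature maps.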

On Thu, 7 May 2020, 23:51 Aleco Kastanos, notifications@github.com wrote:

@alecokas commented on this pull request.

In models/discriminators.py https://github.com/deepmagd/shenaniGAN/pull/34#discussion_r421815855:

    x = self.conv_block_1(x, training=training)
    x = self.conv_block_2(x, training=training)
    x = self.conv_block_3(x, training=training)
    x = self.conv_block_4(x, training=training)
    x = self.conv_block_5(x, training=training)
    x = self.conv_block_6(x, training=training)
    x = self.conv_block_7(x, training=training)
    res = self.res_block(x, training=training)
    x = tf.add(x, res)
    x = tf.nn.leaky_relu(x, alpha=0.2)
    reduced_embedding = self.dense_embed(embedding)
    reduced_embedding = tf.nn.leaky_relu(reduced_embedding, alpha=0.2)
    reduced_embedding = tf.expand_dims(tf.expand_dims(reduced_embedding, 1), 1)
    reduced_embedding = tf.tile(reduced_embedding, [1, 16, 16, 1])

I'm a little confused about what this is doing.


Devin-Taylor commented 4 years ago

It's what we are doing in stage 1, just copied over for now. But yeah, we should probably make sure we don't need to accumulate losses or something like that.

On Thu, 7 May 2020, 23:51 Aleco Kastanos, notifications@github.com wrote:

@alecokas commented on this pull request.

In models/discriminators.py https://github.com/deepmagd/shenaniGAN/pull/34#discussion_r421816092:

    x = self.conv_block_8(x, training=training)
    x = self.conv_2(x)
    return x

  def loss(self, predictions_on_real, predictions_on_wrong, predictions_on_fake):
      """ Calculate the loss for the predictions made on real, wrong, and fake images.
          Arguments:
              predictions_on_real : Tensor
              predictions_on_wrong : Tensor
              predictions_on_fake : Tensor
      """
      real_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(predictions_on_real), logits=predictions_on_real))
      wrong_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.zeros_like(predictions_on_wrong), logits=predictions_on_wrong))
      fake_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.zeros_like(predictions_on_fake), logits=predictions_on_fake))
      total_loss = real_loss + (wrong_loss + fake_loss) / 2

Have we decided that this is the correct way to combine the losses?


Devin-Taylor commented 4 years ago

Tried the "reply to this email" functionality, clearly failed haha