In InfoGAN.py line 146, the squared loss is being optimized as follows.
```python
cont_code_est = code_fake[:, self.len_discrete_code:]
cont_code_tg = self.y[:, self.len_discrete_code:]
q_cont_loss = tf.reduce_mean(tf.reduce_sum(tf.square(cont_code_tg - cont_code_est), axis=1))
```
The `code_fake` vector is in [0, 1] because it comes from a softmax non-linearity. The actual latent code being fed in, however, is in [-1, 1] (see line 234):
```python
batch_codes = np.concatenate((batch_labels, np.random.uniform(-1, 1, size=(self.batch_size, 2))), axis=1)
```
Am I missing something?
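For reference, a minimal NumPy sketch of the mismatch (all names and numbers here are made up for illustration, not taken from the repo): a softmax output is strictly positive, so it can never regress onto codes drawn from uniform(-1, 1).

```python
import numpy as np

# Illustrative only: why a softmax output cannot match codes in [-1, 1].
np.random.seed(0)

logits = np.random.randn(4, 2)                                        # stand-in Q-network outputs
softmax = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # every entry in (0, 1)
codes = np.random.uniform(-1, 1, size=(4, 2))                         # codes actually fed to the generator

# For any negative code the softmax estimate cannot reach it, so the
# squared error has an irreducible floor no matter how long we train.
print(np.square(codes - softmax).sum(axis=1))
```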
Oh..
You are right. It must be `code_logit_fake`.
Thanks for your contribution.
I've changed the code.
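For anyone hitting the same thing, a minimal sketch of the fix discussed above (TF1 graph style, assuming the same tensor names and class context as in InfoGAN.py):

```python
# Use the raw Q-network outputs (code_logit_fake) for the continuous code,
# since the continuous latent code is sampled from uniform(-1, 1) and a
# softmax-ed estimate could never cover that range.
cont_code_est = code_logit_fake[:, self.len_discrete_code:]
cont_code_tg = self.y[:, self.len_discrete_code:]
q_cont_loss = tf.reduce_mean(
    tf.reduce_sum(tf.square(cont_code_tg - cont_code_est), axis=1))
```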