@eriklindernoren as always thank you for the amazing library!
Can you explain what this code is doing in the encoder, starting from the variable "mu"? I'm not sure which model this corresponds to in the paper. Are you using this merge layer as a "hack" to push the latent code closer to a normal distribution and make training easier on the network?
Really curious about your interpretation of the merge layer... This seems like a good trick, but I want to make sure I understand what is going on and the why behind it.
def build_encoder(self):
    # Encoder
    img = Input(shape=self.img_shape)
    h = Flatten()(img)
    h = Dense(512)(h)
    h = LeakyReLU(alpha=0.2)(h)
    h = Dense(512)(h)
    h = LeakyReLU(alpha=0.2)(h)
    mu = Dense(self.latent_dim)(h)
    log_var = Dense(self.latent_dim)(h)
    latent_repr = merge([mu, log_var],
                        mode=lambda p: p[0] + K.random_normal(K.shape(p[0])) * K.exp(p[1] / 2),
                        output_shape=lambda p: p[0])
    return Model(img, latent_repr)
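For context, the `mode` lambda in the merge computes `mu + eps * exp(log_var / 2)` with `eps ~ N(0, I)`, i.e. the standard reparameterization trick for sampling from a Gaussian posterior while keeping the path from `mu` and `log_var` differentiable. A minimal numpy sketch of just that computation (the function name is my own, not from the repo):

```python
import numpy as np

def reparameterize(mu, log_var, rng=np.random.default_rng(0)):
    # sigma = exp(log_var / 2), so z = mu + eps * sigma with eps ~ N(0, I).
    # Sampling eps outside the network keeps z differentiable w.r.t. mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + eps * np.exp(log_var / 2)
```

Note that dividing `log_var` by 2 before exponentiating converts the log-variance into a standard deviation, which is why the merge multiplies the noise by `exp(p[1] / 2)` rather than `exp(p[1])`.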
Concerning the training loop code, I notice you do something like the below when training the discriminator, but doesn't this adversely affect performance? Isn't it better to feed nets data randomly (e.g. send a minibatch with both positives and negatives in it and call train_on_batch once)?
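The alternative described above — one combined, shuffled minibatch instead of separate real/fake calls to train_on_batch — could be sketched like this (function name and shapes are hypothetical, not from the repo):

```python
import numpy as np

def mixed_batch(real, fake, rng=np.random.default_rng(0)):
    # Stack real samples (label 1) and fake samples (label 0) into one minibatch,
    # then shuffle so the discriminator never sees a single-class batch.
    x = np.concatenate([real, fake])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])
    idx = rng.permutation(len(x))
    return x[idx], y[idx]
```

With this, the discriminator update becomes a single `d_loss = discriminator.train_on_batch(x, y)` instead of averaging two half-batch updates.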
Traceback (most recent call last):
  File "D:/py_project/untitled/keras/Keras-GAN-master/aae/aae.py", line 189, in <module>
    aae = AdversarialAutoencoder()
  File "D:/py_project/untitled/keras/Keras-GAN-master/aae/aae.py", line 36, in __init__
    self.encoder = self.build_encoder()
  File "D:/py_project/untitled/keras/Keras-GAN-master/aae/aae.py", line 71, in build_encoder
    output_shape=lambda p: p[0])
TypeError: 'module' object is not callable
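For anyone hitting this: the lowercase functional `merge` was removed in Keras 2, so the name now resolves to the `keras.layers.merge` module, hence "'module' object is not callable". One way to express the same sampling step in Keras 2 / tf.keras is a `Lambda` layer. A sketch, assuming `tf.keras` and hypothetical default shapes, not the repository's actual fix:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Flatten, LeakyReLU, Lambda
from tensorflow.keras.models import Model

def build_encoder(img_shape=(28, 28, 1), latent_dim=10):
    img = Input(shape=img_shape)
    h = Flatten()(img)
    h = Dense(512)(h)
    h = LeakyReLU(0.2)(h)
    h = Dense(512)(h)
    h = LeakyReLU(0.2)(h)
    mu = Dense(latent_dim)(h)
    log_var = Dense(latent_dim)(h)

    def sampling(args):
        # Same computation as the old merge(): z = mu + eps * exp(log_var / 2)
        mu, log_var = args
        eps = tf.random.normal(tf.shape(mu))
        return mu + eps * tf.exp(log_var / 2)

    # Lambda replaces the removed lowercase merge()
    latent_repr = Lambda(sampling, output_shape=(latent_dim,))([mu, log_var])
    return Model(img, latent_repr)
```

The `output_shape` argument mirrors the original `output_shape=lambda p: p[0]`: the sampled latent code has the same shape as `mu`.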
Can you explain your rationale for this type of setup? I'm NOT questioning your methods, just trying to understand.