eburling closed this issue 6 years ago.
Putting the loss in a function seemed to have solved it for me.
from keras import metrics
from keras import backend as K

# Compute VAE loss
# (z_mean, z_log_var, img_rows, img_cols come from the enclosing scope)
def my_vae_loss(y_true, y_pred):
    xent_loss = img_rows * img_cols * metrics.binary_crossentropy(K.flatten(y_true), K.flatten(y_pred))
    kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    vae_loss = K.mean(xent_loss + kl_loss)
    return vae_loss

vae.compile(optimizer='rmsprop', loss=my_vae_loss)
Thanks, @mattochal !
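For intuition, the two terms the wrapped loss computes can be sketched in plain NumPy (variable names and shapes here are illustrative, not from the original code):

```python
import numpy as np

def vae_loss_numpy(y_true, y_pred, z_mean, z_log_var, eps=1e-7):
    """Per-sample VAE loss: binary cross-entropy plus KL divergence."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    # Reconstruction term: binary cross-entropy summed over pixels
    xent = -np.sum(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred), axis=-1)
    # KL divergence between q(z|x) = N(z_mean, exp(z_log_var)) and N(0, I)
    kl = -0.5 * np.sum(1 + z_log_var - np.square(z_mean) - np.exp(z_log_var), axis=-1)
    return float(np.mean(xent + kl))

# Sanity check: a perfect reconstruction with a standard-normal posterior
# (z_mean = 0, z_log_var = 0) gives (near-)zero loss.
y = np.array([[0.0, 1.0]])
z0 = np.zeros((1, 2))
print(round(vae_loss_numpy(y, y, z0, z0), 4))  # → 0.0
```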
Can confirm that this works, though we're not sure why yet.
That seemed to fix the same problem I was having, so thank you!
Unfortunately, I'm not any closer to understanding exactly why passing the loss term as a function resolves the issue. If anyone can provide any deeper insight, I'd greatly appreciate it.
I had the same issue, but wrapping the loss in a function and passing it as an argument to the compile function solved it. I would also like to understand why this solves it...
Exactly the same problem and the same solution for me as well, but I think it would be important to find out why that works.
I also had the same problem, and this solved it. Thanks.
I originally had a custom layer that computes the VAE loss, and specified vae.compile(optimizer='rmsprop', loss=None).
This prevented fit_generator from working, probably because fit_generator expects the loss function to take (y_true, y_pred). In that case I used test_on_batch to avoid the error, which also worked.
Thank you, it works. Why it works is still a mystery.
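One hedged explanation, not confirmed in this thread: this may be ordinary Python closure behaviour. my_vae_loss takes only (y_true, y_pred), the signature Keras expects, while z_mean and z_log_var are captured from the enclosing scope when the function is defined. A toy sketch of the pattern, with plain floats standing in for tensors:

```python
import math

# Stand-ins for the encoder outputs that the real loss closes over.
z_mean, z_log_var = 0.5, -1.0

def my_loss(y_true, y_pred):
    # y_true / y_pred arrive from the framework per batch;
    # z_mean / z_log_var are read from the enclosing scope (a closure).
    reconstruction = (y_true - y_pred) ** 2
    kl = -0.5 * (1 + z_log_var - z_mean ** 2 - math.exp(z_log_var))
    return reconstruction + kl

# The framework can call the two-argument function whenever it likes:
print(round(my_loss(1.0, 0.8), 4))  # → 0.3489
```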
Had the same issue, working with TF 2.0 (stable). Adding the loss explicitly did not work:
vae.add_loss(vae_loss)
Disabling eager execution solved it:
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.compat.v1.Session(config=config)
@eburling not sure if it makes sense to reopen, but it would be really interesting to get a clue on why this happens.
Hi all,
First off, my Theano and Keras installations are up to date.
I am trying to adapt the Keras VAE template
variational_autoencoder_deconv.py
for a non-MNIST unlabeled dataset. I am using 38,585 256x256 pixel training images and 5,000 validation images, so I can't go the easy route of mnist.load_data() and load all the images into memory. I have instead resorted to using the ImageDataGenerator class along with the ImageDataGenerator.flow_from_directory(...) and vae_model.fit_generator(...) methods. I have done my best to make sure the in/out dimensions of each layer match, and have set the generator to class_mode='input' so that my target output is the same as my input. Unfortunately, I keep getting an error telling me that my model is confused by the input image target, e.g. ValueError: ('Error when checking model target: expected no data, but got:', array([]). The code is included below, followed by the output and traceback.
I recognize that the error is most likely due to me trying to shoehorn my data into a template that was purpose-built for MNIST data, but despite my best efforts in following the traceback and scouring Keras issues, I have been unable to get it right. The docs for flow_from_directory(...) suggest the use of class_mode='input', i.e. input=target, for training autoencoders in this unsupervised setting, but flow_from_directory(..., class_mode='input', ...) seems to be upsetting vae.fit_generator(...). Any thoughts as to why this could be?
Thanks and all the best.
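As an aside, flow_from_directory expects the images to sit inside at least one subdirectory of the path you pass, even with class_mode='input'. An illustrative layout (directory and file names here are hypothetical):

```
train/
    images/
        img_00001.png
        img_00002.png
        ...
validation/
    images/
        img_00001.png
        ...
```

With class_mode='input' the generator then yields each batch as (x, x), so the target matches the input, which is what fitting an autoencoder via fit_generator requires.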