Closed by IAmSuyogJadhav 5 years ago
When Keras evaluates the final score, it reports the mean of all values, so you actually get the same score even with `axis=-1`. The difference arises only when you use a weighted loss. I tested it as well: I tried both L2 formulas and got the same results (I recorded two metrics simultaneously and got identical values at every iteration during training).
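This equivalence can be checked outside Keras with a small NumPy sketch (the 5D tensor shapes here are illustrative assumptions, not taken from the repo): averaging the per-sample losses gives the same number as averaging over every element at once.

```python
import numpy as np

# Hypothetical 5D batch (batch, channel, depth, height, width),
# mimicking the VAE input/output tensors discussed above.
rng = np.random.default_rng(0)
inp = rng.normal(size=(4, 1, 2, 2, 2))
out_VAE = rng.normal(size=(4, 1, 2, 2, 2))

# Per-sample L2: reduce every axis except the batch axis, as in the loss formula.
per_sample = np.mean(np.square(inp - out_VAE), axis=(1, 2, 3, 4))  # shape (4,)

# The mean of the per-sample values equals the global mean over all elements,
# which is why both formulas report the same score.
assert np.isclose(per_sample.mean(), np.mean(np.square(inp - out_VAE)))
```

This also shows why the difference only appears with sample weighting: weights are applied to the per-sample values before the final averaging.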
So this is indeed correct. I will keep it then. Earlier, when I was using keras.losses.mse, it was giving an error about mismatched shapes; changing to this formula fixed it. Thank you for clearing up my doubt (and for your other contributions to this repo as well!) 🙌
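A plausible source of that shape mismatch: keras.losses.mse averages only over the last axis, so on 5D tensors it returns a 4D tensor rather than one value per sample. The sketch below mirrors that reduction in NumPy (the shapes are illustrative assumptions):

```python
import numpy as np

def mse_last_axis(y_true, y_pred):
    # Mirrors keras.losses.mse, which averages only over the last axis.
    return np.mean(np.square(y_true - y_pred), axis=-1)

y_true = np.zeros((4, 1, 2, 2, 2))
y_pred = np.ones((4, 1, 2, 2, 2))
print(mse_last_axis(y_true, y_pred).shape)  # (4, 1, 2, 2) — still 4D, not per-sample
```

Reducing over `axis=(1, 2, 3, 4)` instead collapses everything except the batch axis, which is the shape Keras losses are expected to produce.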
Hi, when I add your L2 loss function

```python
loss_L2 = K.mean(K.square(inp - out_VAE), axis=(1, 2, 3, 4))
```

it seems to give a shape mismatch error. The output of the loss is an array like [a, b, c, d]. I thought the loss should be a single value instead. Can you help me with this?
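For reference, that four-element array can be reproduced with NumPy: reducing over `axis=(1, 2, 3, 4)` keeps the batch axis, so a batch of four samples yields four loss values (the tensor shapes below are illustrative assumptions).

```python
import numpy as np

# Toy 5D tensors shaped (batch, channel, depth, height, width);
# a batch of 4 matches the four-element loss array described above.
inp = np.zeros((4, 1, 2, 2, 2))
out_VAE = np.ones((4, 1, 2, 2, 2))

loss_L2 = np.mean(np.square(inp - out_VAE), axis=(1, 2, 3, 4))
print(loss_L2.shape)  # (4,) — one loss value per sample, hence [a, b, c, d]
print(loss_L2)        # [1. 1. 1. 1.]
```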
I couldn't find anything on the internet about computing the MSE loss between the input and the VAE output. I think this might actually be correct, but I am doubtful. Do you have a better solution?
The current code: