Open parkjh688 opened 3 years ago
Hi. Without exactly remembering all the details, I think this is why:
1) I usually add a small constant `eps` when going to the log domain, i.e. `x_log = log(x + eps)`, which means that when going back from log to linear you do `x = exp(x_log) - eps`.
2) The notation with `_` at the end is for the ground truth, so that `y_` and `y_lum_lin_` correspond to the ground-truth image and luminance, respectively. `y` and `y_lum_lin`, on the other hand, refer to the reconstructed image and luminance, predicted by the network.
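The round trip in 1) can be sketched as follows (numpy stands in for TensorFlow here, and the exact `eps` value is an assumption, not necessarily the repo's):

```python
import numpy as np

eps = 1.0 / 255.0  # assumed small constant; the repo may use a different value

x = np.array([0.0, 0.5, 2.0, 100.0])  # linear-domain values, including zero

# forward: linear -> log domain; the eps offset keeps log(0) finite
x_log = np.log(x + eps)

# inverse: log -> linear must subtract the same eps to undo the offset
x_rec = np.exp(x_log) - eps

print(np.allclose(x, x_rec))  # True: the round trip recovers x exactly
```

Without the matching `- eps` on the way back, every reconstructed value would be biased upward by `eps`.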
Hi. I'm reading the paper and can't understand part of the loss function, so I have some questions about the loss-function code.
(1) Why didn't you add `eps` here (instead of subtracting it)?

```python
y_lum_lin = tf.nn.conv2d(tf.exp(y) - eps, lum_kernel, [1, 1, 1, 1], padding='SAME')
```
(2) Why did you apply `tf.log` twice to `y_lum`?

```python
y_lum_ = tf.log(y_lum_lin_ + eps)
y_lum = tf.log(y_lum_lin + eps)
x_lum = tf.log(x_lum_lin + eps)
```
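As the reply above notes, `tf.log` is not applied twice: `y_lum_` (ground truth, trailing underscore) and `y_lum` (prediction) are different tensors that each get one log. A minimal numpy sketch of that naming convention; the Rec. 709 luminance weights and `eps` value are assumptions for illustration, not necessarily the repo's:

```python
import numpy as np

np.random.seed(0)
eps = 1.0 / 255.0  # assumed small constant

# assumed per-channel luminance weights (Rec. 709); the repo's kernel may differ
lum_w = np.array([0.2126, 0.7152, 0.0722])

y_ = np.random.rand(4, 4, 3)      # ground-truth image (trailing underscore)
y = np.log(y_ + eps) + 0.01       # stand-in for the network's log-domain output

# linear luminance for ground truth and prediction (dot with the weights)
y_lum_lin_ = (y_ * lum_w).sum(axis=-1)
y_lum_lin = ((np.exp(y) - eps) * lum_w).sum(axis=-1)

# one log per tensor: y_lum_ and y_lum are distinct, not a double log
y_lum_ = np.log(y_lum_lin_ + eps)
y_lum = np.log(y_lum_lin + eps)
```

GitHub's markdown eats the trailing underscores as emphasis markers when code isn't fenced, which is what made the two lines look identical in the original post.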