Open havefunbb opened 3 years ago
```python
def Regulation_loss(id_coeff, ex_coeff, tex_coeff, opt):
    w_ex = opt.w_ex
    w_tex = opt.w_tex
    regulation_loss = tf.nn.l2_loss(id_coeff) + w_ex * tf.nn.l2_loss(ex_coeff) + w_tex * tf.nn.l2_loss(tex_coeff)
    regulation_loss = 2 * regulation_loss / tf.cast(tf.shape(id_coeff)[0], tf.float32)  # why multiply by 2???
    return regulation_loss
```
`tf.nn.l2_loss` actually computes `sum(x**2) / 2`. We multiply by 2 to compensate for the division inside the TensorFlow function.
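A quick numeric check of that explanation (using NumPy here so it runs without TensorFlow; the helper below just mirrors the documented formula `tf.nn.l2_loss(x) = sum(x**2) / 2`):

```python
import numpy as np

def l2_loss(x):
    # Mirrors tf.nn.l2_loss: sum of squared entries, divided by 2.
    return np.sum(np.square(x)) / 2.0

coeff = np.array([1.0, 2.0, 3.0])

half = l2_loss(coeff)        # sum of squares is 14, so this is 7.0
full = 2.0 * half            # multiplying by 2 recovers the plain sum of squares, 14.0
```

So the `2 *` in `Regulation_loss` turns the halved value back into the plain sum of squared coefficients before the batch-size normalization.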
OK, thanks
Multiplying by 2 isn't necessary in the code, though; if you really wanted to increase `Regulation_loss`, you could just increase the loss weight in option.py. Why did you do it this way in your code?