Closed: @116705792 closed this issue 6 years ago
I have these questions, too.
@116705792 It's the other way around. The paper finds a saddle point of the objective function E = predict_loss - lambda*domain_loss (equation 9 in Ganin et al., 2016): the shared feature extractor's parameters minimize predict_loss while maximizing domain_loss (the domain classifier itself still minimizes domain_loss). Maximizing a function is the same as minimizing its negative, so this is identical to finding the parameters that jointly minimize predict_loss + lambda*domain_loss, with the gradient reversal layer flipping the sign of the domain gradient on the backward pass. The lambda is implemented as l in the gradient reversal layer, flip_gradient.py.
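This equivalence is easy to check numerically. Below is a minimal sketch (not the code from flip_gradient.py; the toy losses and variable names are my own) showing that multiplying the domain gradient by -lambda before it reaches a shared parameter gives exactly the gradient of E = predict_loss - lambda*domain_loss:

```python
# Toy 1-D illustration of the gradient reversal trick.
# The losses below are illustrative assumptions, not the repo's code.

def predict_loss(w):
    return (w - 2.0) ** 2        # toy label-prediction loss

def domain_loss(w):
    return (w + 1.0) ** 2        # toy domain-classification loss

def grad(f, w, eps=1e-6):
    # central finite-difference approximation of df/dw
    return (f(w + eps) - f(w - eps)) / (2.0 * eps)

lam = 0.5   # the paper's lambda, the GRL's `l`
w = 1.0     # a shared (feature-extractor) parameter

# What the shared parameter sees when the GRL multiplies the
# domain gradient by -lam on the backward pass:
g_grl = grad(predict_loss, w) - lam * grad(domain_loss, w)

# The gradient of the paper's objective E = predict_loss - lam * domain_loss:
g_paper = grad(lambda x: predict_loss(x) - lam * domain_loss(x), w)

assert abs(g_grl - g_paper) < 1e-4  # identical up to numerical error
```

So the forward pass can minimize the plain sum of the two losses, and the reversal layer alone turns the domain term into a maximization for the feature extractor.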
Thank you very much! I'm learning a lot.
Hey, I have some questions. First, in your code total_loss = domain_loss + predict_loss, but in the paper total_loss = domain_loss - lambda*predict_loss. Second, is this because of the GRL layer?