Open CN-BiGLiu opened 5 years ago
We use a trick to reverse the gradient before it back-propagates from the discriminator to the feature extractor, so we do not need to multiply the discriminator loss by -1 to train the feature extractor.
The classification loss takes no input from the discriminator. By PyTorch's autograd rules, no gradient flows from the classification loss to the discriminator, even when we back-propagate from the sum of the classification loss and the transfer loss.
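As a sketch of the trick (assuming the hook signature used in the repo's pytorch/network.py; the toy tensors below are only illustrative):

```python
import torch

def grl_hook(coeff):
    # Returns a backward hook that scales incoming gradients by -coeff,
    # reversing them before they continue back to the feature extractor.
    def hook(grad):
        return -coeff * grad.clone()
    return hook

# Toy check: a hooked tensor passes reversed gradients upstream.
x = torch.ones(3, requires_grad=True)
feat = x * 2.0                       # stands in for the feature extractor output
feat.register_hook(grl_hook(1.0))    # reverse gradients flowing through feat
loss = feat.sum()                    # stands in for the discriminator loss
loss.backward()
print(x.grad)                        # tensor([-2., -2., -2.]) -- sign flipped by the hook
```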
Thanks for your answer. The trick is `x.register_hook(grl_hook(coeff))`, is that right?
Yes
How about the tensorflow version? Where is the trick of reversing the gradient? Thanks a lot!
The TensorFlow version is under implementation. The gradient-reversing trick is in pytorch/network.py line 388. The `grl_hook` inserts a GRL layer between the ResNet CNN and the domain discriminator, which lets the two adversarial players be updated in a single forward and backward pass.
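A minimal sketch of that single-pass adversarial update (the linear layers below are stand-ins for the ResNet features and the discriminator, and `grl_hook` is assumed to match the repo's definition):

```python
import torch
import torch.nn as nn

def grl_hook(coeff):
    # Backward hook that multiplies the gradient by -coeff (gradient reversal).
    def hook(grad):
        return -coeff * grad.clone()
    return hook

# Hypothetical tiny networks standing in for the feature extractor and discriminator.
feature_extractor = nn.Linear(4, 4)
discriminator = nn.Linear(4, 1)
optimizer = torch.optim.SGD(
    list(feature_extractor.parameters()) + list(discriminator.parameters()), lr=0.1)

x = torch.randn(8, 4)
domain_labels = torch.randint(0, 2, (8, 1)).float()

features = feature_extractor(x)
features.register_hook(grl_hook(1.0))   # reverse gradients flowing back to F only
domain_logits = discriminator(features)
domain_loss = nn.functional.binary_cross_entropy_with_logits(domain_logits, domain_labels)

optimizer.zero_grad()
domain_loss.backward()   # one backward pass: D gets the true gradient (minimizes
                         # the domain loss), F gets the reversed one (maximizes it)
optimizer.step()
```

Because the hook sits on `features`, the discriminator's own parameter gradients are untouched; only the gradients continuing back into the feature extractor are negated.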
In the DANN model, after inserting a GRL between the generator and the discriminator, the gradient of the domain loss w.r.t. the feature extractor F is multiplied by -1. But in the CDAN model, the discriminator's input is the tensor product of the feature vector and the predicted probability vector, so during back-propagation the domain loss has a gradient w.r.t. both the feature extractor F and the classifier G. May I know how your algorithm computes the gradient of the domain loss w.r.t. the predicted probabilities output by classifier G? Will the `grl_hook` also reverse the gradient of the domain loss w.r.t. the classifier G? Thanks a lot!
In pytorch/loss.py line 22, `softmax_output = input_list[1].detach()` detaches G's output from the graph during back-propagation, so the domain loss is not used to update classifier G.
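A small sketch of what `detach()` does here (the tensors below are illustrative stand-ins, not the repo's actual variables):

```python
import torch

logits = torch.randn(2, 3, requires_grad=True)       # stands in for classifier G's output
softmax_output = torch.softmax(logits, dim=1).detach()

# The detached tensor is cut from the autograd graph, so any loss built on it
# contributes no gradient to the classifier.
assert not softmax_output.requires_grad
domain_loss = (softmax_output * 0.5).sum()           # illustrative stand-in for the domain loss
assert not domain_loss.requires_grad                 # backward through it cannot reach G
```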
I cannot understand two things; I would appreciate it if you could explain. (1) pytorch/loss.py line 33: `entropy.register_hook(grl_hook(coeff))`. Why does the entropy need this -1 hook? The gradients passed back from the domain discriminator to the feature extractor have already been reversed by `x.register_hook(grl_hook(coeff))`, so registering a -1 hook on the entropy confuses me. (2) I noticed that you use `softmax_output = input_list[1].detach()`, which blocks the gradients from the discriminator to the classifier, but the entropy is obtained by `loss_func.Entropy(softmax_output)`, resulting in `entropy.requires_grad=True`. This means gradients can be back-propagated to the classifier through the entropy (am I right?). What is this for?
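Regarding (1), note that a hook registered on an intermediate tensor only rescales gradients flowing back along that tensor's own path; gradients arriving via other paths are untouched, so the hook on the entropy acts independently of the hook on the features. A toy illustration (not the repo's code):

```python
import torch

x = torch.ones(2, requires_grad=True)
a = x * 3.0
b = x * 5.0
a.register_hook(lambda g: -g)    # reverse only the gradient flowing through a
(a.sum() + b.sum()).backward()
print(x.grad)                    # tensor([2., 2.])  = (-3) + 5 per element
```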
I am also confused about question (1). Do you understand it now?
But generally speaking, the domain loss still needs to optimize the feature extractor, doesn't it?
Thanks to Long for the implementation. There are two points confusing me.