er-muyue / DeFRCN


How to update the affine layer transformation A? #2

Closed huyphan168 closed 3 years ago

huyphan168 commented 3 years ago

I saw your pseudo-code for the backward function of the GDL. However, it only returns the input's gradient (the backbone gradient) for learning in the backbone, while the gradient of the affine layer A is None. How can autograd update the parameters of the channel-wise weights without a gradient?
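
For reference, the kind of backward being asked about looks roughly like this (a minimal PyTorch sketch of the idea, not the repo's exact code; the name `_GradientDecoupler` and the `scale` argument are assumptions):

```python
import torch

class _GradientDecoupler(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient flowing
    back to the backbone in the backward pass."""

    @staticmethod
    def forward(ctx, x, scale):
        ctx.scale = scale
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Only the input's gradient is returned; the second value
        # (None) is for `scale`, which is a plain float, not a parameter.
        return ctx.scale * grad_output, None
```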

er-muyue commented 3 years ago

In fact, the gradient of A comes from the RCNN (or RPN) loss. If the model has the correct loss functions and back-propagation, it won't be None.

Please refer to rcnn#L78 and rcnn#L85 for more details.
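
To make the point concrete, here is a minimal sketch building on the `_GradientDecoupler` above (assumptions: the affine layer A is a per-channel scale and shift applied after the gradient-decoupling Function; `AffineLayer` is an illustrative name, not the repo's class):

```python
import torch
import torch.nn as nn

class AffineLayer(nn.Module):
    """Channel-wise affine transformation A. Its weight and bias are
    ordinary nn.Parameters applied *after* the custom Function, so
    autograd fills their .grad from the downstream RPN/RCNN loss;
    the Function itself never has to return a gradient for them."""

    def __init__(self, num_channels, grad_scale=0.01):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        self.grad_scale = grad_scale

    def forward(self, x):
        x = _GradientDecoupler.apply(x, self.grad_scale)  # decouple backbone grad
        return x * self.weight + self.bias                # A's params get grads here

# Quick check that A's gradient is not None:
feat = torch.randn(2, 256, 32, 32, requires_grad=True)
layer = AffineLayer(256)
layer(feat).sum().backward()          # stand-in for the RPN/RCNN loss
print(layer.weight.grad is not None)  # True: A's grad comes from the loss
print(feat.grad.abs().max())          # backbone grad, scaled by grad_scale
```

In other words, the custom Function only intercepts the gradient flowing into the backbone; A sits outside it on the computation graph, so its gradient is populated by ordinary back-propagation from the head losses.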

er-muyue commented 3 years ago

Closing this. Feel free to reopen it if you still have questions.