a514514772 / DISE-Domain-Invariant-Structure-Extraction

Pytorch Implementation -- All about Structure: Adapting Structural Information across Domains for Boosting Semantic Segmentation, CVPR 2019
GNU General Public License v3.0

deeplab optim_parameters not called. same lr rate for all layers #4

Closed gsujan closed 4 years ago

gsujan commented 5 years ago

Hi

You have two different learning rates for the shared encoder (the DeepLab model) in the optim_parameters function in model.py:

```python
def optim_parameters(self, learning_rate):
    return [{'params': self.get_1x_lr_params_NOscale(), 'lr': 1 * learning_rate},
            {'params': self.get_10x_lr_params(), 'lr': 10 * learning_rate}]
```

But this function is not called during optimizer initialization, so all parameters are loaded with a single learning rate.
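For reference, a minimal sketch contrasting the two initialization styles under discussion; `model` is a placeholder for the DeepLab instance, and the base learning rate, momentum, and weight decay are illustrative values, not taken from this repository:

```python
import torch.optim as optim

# Style A: a single learning rate for every parameter of the shared encoder.
optimizer = optim.SGD(model.parameters(), lr=2.5e-4,
                      momentum=0.9, weight_decay=5e-4)

# Style B: per-layer learning rates via the parameter groups returned by
# optim_parameters (1x lr for the backbone layers, 10x lr for the classifier).
optimizer = optim.SGD(model.optim_parameters(2.5e-4), lr=2.5e-4,
                      momentum=0.9, weight_decay=5e-4)
```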

This also differs from the AdaptSeg code.

Is this on purpose? Does it give better results than using separate learning rates, one for layer1 through layer4 and a different one for layer5 and layer6?

howard-mahe commented 5 years ago

Hi, I don't think this differs from the AdaptSeg code. During optimizer initialization, optim_parameters is called here. During training, the learning rate is set by the adjust_learning_rate function.
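For context, AdaptSeg-style training typically applies a polynomial ("poly") decay to the base learning rate and then re-applies the 1x / 10x split across the parameter groups. A rough sketch of that idea; the argument names, default power, and iteration counts here are illustrative and not copied from this repository:

```python
def lr_poly(base_lr, cur_iter, max_iter, power=0.9):
    # Polynomial decay schedule commonly used for DeepLab-style training.
    return base_lr * ((1 - float(cur_iter) / max_iter) ** power)

def adjust_learning_rate(optimizer, cur_iter, base_lr=2.5e-4, max_iter=250000):
    # Decay the base learning rate, then keep the 10x ratio for the second group.
    lr = lr_poly(base_lr, cur_iter, max_iter)
    optimizer.param_groups[0]['lr'] = lr
    if len(optimizer.param_groups) > 1:
        optimizer.param_groups[1]['lr'] = lr * 10
```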

gsujan commented 4 years ago

Sorry for the late reply. You are right. Closing the issue.