Hi,
Q1) I use the same hyperparameters for all adaptation methods. What is worth noting is that the hyperparameters may differ across datasets.
Q2) I don't provide a configuration option for the loss weight because I found the results in the paper could be reproduced with the default weight of 1.0. In fact, it's not difficult to weight a loss yourself in Caffe2. Something like
weighted_loss = model.Scale(loss_to_be_weighted, loss_to_be_weighted, scale=weight)
could achieve what you want. In this implementation, the weights for the two-level adaptation are the GRL weights at the two levels, which is consistent with the original Caffe implementation. The weights of the two domain classifier losses and the consistency loss are all 1.0.
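For example, a minimal sketch of weighting a loss blob by the paper's lambda = 0.1 (the blob name "da_img_loss" is only illustrative, not a blob from this repo):

from caffe2.python import model_helper

model = model_helper.ModelHelper(name="da_faster_rcnn")
# ... ops producing a loss blob, here hypothetically named "da_img_loss" ...

# Scale the loss in place by lambda = 0.1 before it enters the total loss.
model.Scale("da_img_loss", "da_img_loss", scale=0.1)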
Q3) I didn't change lr_mult in this implementation. You can change the learning rate for specific parameters following the examples here.
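If you want to emulate Caffe's lr_mult yourself, one option (a sketch, assuming plain SGD without weight decay; the gradient blob name is hypothetical) is to scale a parameter's gradient blob before the weight update, which has the same effect as multiplying its learning rate:

# Scaling the gradient of an instance-level domain-classifier weight by 10
# is equivalent to lr_mult = 10 for that parameter under plain SGD.
model.Scale("da_ins_fc_w_grad", "da_ins_fc_w_grad", scale=10.0)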
Hello, I'm trying to reproduce your work but have several questions. (I'm new to Caffe2.)
Q1) Did you use the same hyperparameters for each of the settings: image level, image + instance level, and image + instance + consistency loss?
Q2) I can't find your configuration for the loss weight. In the original paper, the weight (lambda) for the image-level, instance-level, and consistency losses is set to 0.1. I checked "DA_IMG_GRL_WEIGHT" and "DA_INS_GRL_WEIGHT", but they don't seem to be the same as lambda. Where can I find it, or did you just set it to 1.0?
Q3) In the Caffe implementation (https://github.com/yuhuayc/da-faster-rcnn), they set lr_mult 10 times higher for the instance-level domain classifiers. Did you set any hyperparameter responsible for lr_mult?