Thanks for your reply and suggestions!
Currently I'm trying to reproduce the RevGrad results reported in this paper, but I have been struggling with it for a few days.
I have tried to reproduce it, too. However, I failed to get the same results as reported in the paper. I think you can ask Professor Mingsheng Long's students for the code (http://ise.thss.tsinghua.edu.cn/~mlong/).
okay, thanks for your help! If I can reproduce the results, I will update here.
Hi @easezyc
After I contacted the MADA author, he suggested that I follow this repo. Now I can reproduce the reported results!
The main differences are as follows:
Thanks a lot.
Hi @easezyc, thanks for your great implementation. When I tried RevGrad in `pytorch1.0`, I ran into some questions. Could you help me?

1. The original paper says the optimizer uses `momentum=0.9`. However, at line 62 the optimizer is created anew on every iteration, which means the momentum buffers are reset each time. https://github.com/easezyc/deep-transfer-learning/blob/cc97b7d248b7e7d9b187a3bae99eb560c458f89c/UDA/pytorch1.0/RevGrad/RevGrad.py#L62
2. `optimizer_critic` never seems to call `optimizer_critic.step()`. https://github.com/easezyc/deep-transfer-learning/blob/cc97b7d248b7e7d9b187a3bae99eb560c458f89c/UDA/pytorch1.0/RevGrad/RevGrad.py#L63

I tried to fix these issues, but I still cannot reproduce the reported results. My modifications are below.
```python
...
for i in range(1, iteration+1):
    # update learning rate during training
    ...
```
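For anyone running into the same two issues, below is a minimal sketch of how the fixes could look; it is not the repo's code. The networks, batch data, `lr0`, and `iteration` count are made-up placeholders, the gradient-reversal layer and real data loaders are omitted for brevity, and the annealing formula `lr0 / (1 + 10*p)**0.75` is my reading of the DANN paper, so please double-check the constants against the original settings.

```python
import torch
import torch.nn as nn

# Placeholder classifier and domain critic, just for illustration (not the repo's RevGrad model).
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 31))
critic = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 2))

lr0 = 0.01          # assumed base learning rate
iteration = 200     # placeholder iteration count

# Fix 1: create the optimizers ONCE, outside the loop, so momentum buffers persist across iterations.
optimizer = torch.optim.SGD(model.parameters(), lr=lr0, momentum=0.9, weight_decay=5e-4)
optimizer_critic = torch.optim.SGD(critic.parameters(), lr=lr0, momentum=0.9, weight_decay=5e-4)

for i in range(1, iteration + 1):
    # Update the learning rate in place instead of rebuilding the optimizer
    # (annealing schedule as I understand it from the DANN paper; treat as an assumption).
    p = i / iteration
    lr = lr0 / (1 + 10 * p) ** 0.75
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
    for param_group in optimizer_critic.param_groups:
        param_group['lr'] = lr

    optimizer.zero_grad()
    optimizer_critic.zero_grad()

    # Dummy batch so the sketch runs end to end; replace with real source/target batches
    # and the gradient-reversal wiring from the actual RevGrad model.
    x = torch.randn(32, 256)
    y = torch.randint(0, 31, (32,))   # class labels
    d = torch.randint(0, 2, (32,))    # domain labels

    loss = nn.functional.cross_entropy(model(x), y) + nn.functional.cross_entropy(critic(x), d)
    loss.backward()

    optimizer.step()
    optimizer_critic.step()   # Fix 2: remember to step the critic's optimizer as well
```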