guizilaile23 opened this issue 2 years ago
Sorry, I checked the code and it was my mistake: there were some bugs in my dataset handling. By the way, I fully understand the idea in your paper, maximizing the nuclear norm, and my results are nearly the same as the paper's. The only thing I don't understand is:
why not use 1/exp(-torch.mean(s_tgt))? I think it would have a similar effect of maximizing the nuclear norm, while also keeping the loss value positive.
I tested 1/exp(-torch.mean(s_tgt)) as the transfer loss, and the accuracy seems a little higher than directly using -torch.mean(s_tgt).
E.g. on Office-Home, transferring from Real_World to Art, the target-domain accuracy is 0.71982, versus 0.69551 with -torch.mean(s_tgt).
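For reference, a minimal NumPy sketch of how the two loss values compare in sign. This is not the repository's code: the batch size, class count, and prediction matrix are made up, and `s_tgt` here stands in for the singular values of the batch softmax matrix as discussed above.

```python
import numpy as np

# Hypothetical batch of softmax predictions (shapes are made up).
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax

# s_tgt: singular values of the prediction matrix, all >= 0.
s_tgt = np.linalg.svd(probs, compute_uv=False)

plain_loss = -np.mean(s_tgt)              # -mean(s) is always <= 0
exp_loss = 1.0 / np.exp(-np.mean(s_tgt))  # equals exp(mean(s)), always > 0

print(plain_loss, exp_loss)
```

Since singular values are nonnegative, the plain form is never positive, while the exponential form is always positive, which is the sign difference being discussed.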
Good try! I think your version is better.
Hi, first of all, thanks for the code and your paper; it's really excellent work. While debugging, I found that the loss value is negative. Is that expected? I debugged BNM in the DA setting on Office-31 (source: amazon, target: dslr); after 2000 iterations the transfer loss is about -0.8 and the classifier loss is about 0.02.
I also found that if we train with the classifier loss alone, the target accuracy reaches 100% around iteration 1800, which suggests that dropping the BNM loss does no harm when transferring the net from source to target.
Am I missing something? By the way, we are using PyTorch 1.9.0.
We look forward to your reply. Thanks again.
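On the sign question above: a transfer loss of the form -mean(singular values) is negative by construction, since singular values are always nonnegative. A rough NumPy sketch (not the repository's exact code; the batch size is made up, and 31 classes only echoes Office-31):

```python
import numpy as np

# Made-up batch of softmax outputs on the target domain.
rng = np.random.default_rng(42)
batch, num_classes = 36, 31  # 31 classes, as in Office-31
logits = rng.normal(size=(batch, num_classes))
softmax_tgt = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Singular values are >= 0, so negating their mean gives a loss <= 0.
s_tgt = np.linalg.svd(softmax_tgt, compute_uv=False)
transfer_loss = -np.mean(s_tgt)
print(transfer_loss)  # negative
```

So a negative transfer loss around -0.8 is consistent with minimizing the negated nuclear-norm term rather than a sign bug.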