deepai-lab closed this issue 3 years ago.
Actually, both are right. The PyTorch 0.3 version is implemented following 'Simultaneous Deep Transfer Across Domains and Tasks' (ICCV 2015). The PyTorch 1.0 version is implemented as the real RevGrad.
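For context, the "real RevGrad" refers to the gradient reversal layer from Ganin & Lempitsky's 'Unsupervised Domain Adaptation by Backpropagation'. A minimal sketch of such a layer in modern PyTorch might look like this (the names `GradReverse` and `grad_reverse` are illustrative, not taken from this repo):

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Gradient reversal layer: identity in the forward pass,
    negated (and scaled) gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the sign of the gradient flowing into the feature extractor,
        # scaled by lambd; the second return value matches the lambd input.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```

Placing this between the shared feature extractor and the domain classifier makes the feature extractor maximize the domain-confusion loss while the domain classifier minimizes it.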
Hi @easezyc,
Thank you for your answer. If we have three domains, how can we use RevGrad? We would like to minimize the distance among the three source domains. Could you please help me? Thanks in advance.
```python
class RevGrad(nn.Module):
    def __init__(self, num_classes=31):
        super(RevGrad, self).__init__()
        self.sharedNet = resnet50(False)
        self.cls_fc = nn.Linear(2048, num_classes)
        self.domain_fc = nn.Linear(2048, 3)

    def forward(self, data):
        data = self.sharedNet(data)
        clabel_pred = self.cls_fc(data)
        dlabel_pred = self.domain_fc(data)
        return clabel_pred, dlabel_pred
```
```python
def train(model):
    # each source domain gets its own loader and an integer domain label
    src1_data_iter = iter(src1_loader)
    src2_data_iter = iter(src2_loader)
    src3_data_iter = iter(src3_loader)
    src1_dlabel = Variable(torch.ones(batch_size).long().cuda())
    src2_dlabel = 2 * Variable(torch.ones(batch_size).long().cuda())
    src3_dlabel = Variable(torch.zeros(batch_size).long().cuda())
    ...
```
I am confused with this part:

```python
new_label_pred = torch.cat((src_dlabel_pred, tgt_dlabel_pred), 0)
confusion_loss = nn.BCELoss()
confusion_loss_total = confusion_loss(
    new_label_pred,
    torch.cat((src_dlabel, tgt_dlabel), 0).float().reshape(2 * batch_size, 1))
```
How can we do that?
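One point of confusion here: `nn.BCELoss` is a binary loss that expects probabilities, so it does not extend naturally to three domains. Since `domain_fc` already outputs 3 logits, a plausible alternative (my assumption, not something from this repo) is a 3-way domain classifier trained with `nn.CrossEntropyLoss` on integer domain labels:

```python
import torch
import torch.nn as nn

batch_size = 4

# Stand-in for the concatenated domain_fc outputs of three source batches;
# in the real training loop these would come from the model, not randn.
domain_logits = torch.randn(3 * batch_size, 3)

# Integer domain labels 0/1/2, one per sample, matching nn.Linear(2048, 3)
dlabels = torch.cat([
    torch.full((batch_size,), 0, dtype=torch.long),
    torch.full((batch_size,), 1, dtype=torch.long),
    torch.full((batch_size,), 2, dtype=torch.long),
])

# CrossEntropyLoss takes raw logits and class indices directly,
# so no sigmoid/reshape gymnastics as in the BCELoss snippet above.
confusion_loss = nn.CrossEntropyLoss()
loss = confusion_loss(domain_logits, dlabels)
```

This is only a sketch of the multi-domain discriminator loss; whether adversarial alignment of three sources actually works well is the separate question discussed below.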
I did not try adversarial training for more than two domains. I think you can refer to some other references, e.g., Task-Adversarial Co-Generative Nets.
Hi @easezyc ,
You provided both versions of the RevGrad implementation, one for PyTorch 0.3 and one for PyTorch 1.0.
In PyTorch 0.3 the code looks like this: `class RevGrad(nn.Module):`
and in PyTorch 1.0 the code looks like this:
My question is: which method is correct? If both methods are correct, could you please explain the difference a bit? Thanks in advance.