wasidennis / AdaptSegNet

Learning to Adapt Structured Output Space for Semantic Segmentation, CVPR 2018 (spotlight)

I have a question about the loss_adv in your code, can you spend a little time to help me? #33

Open zhouyuan888888 opened 5 years ago

zhouyuan888888 commented 5 years ago

In your paper *Learning to Adapt Structured Output Space for Semantic Segmentation*, loss_adv is designed to train the segmentation network and fool the discriminator by maximizing the probability of the target prediction being considered as the source prediction. Do you mean we should maximize loss_adv when training the segmentation network? If so, where do you maximize loss_adv in your code?

```python
pred_target1, pred_target2 = model(images)
pred_target1 = interp_target(pred_target1)
pred_target2 = interp_target(pred_target2)

D_out1 = model_D1(F.softmax(pred_target1))
D_out2 = model_D2(F.softmax(pred_target2))

loss_adv_target1 = bce_loss(D_out1, Variable(torch.FloatTensor(D_out1.data.size()).fill_(source_label)).cuda(args.gpu))
loss_adv_target2 = bce_loss(D_out2, Variable(torch.FloatTensor(D_out2.data.size()).fill_(source_label)).cuda(args.gpu))

loss = args.lambda_adv_target1 * loss_adv_target1 + args.lambda_adv_target2 * loss_adv_target2
loss = loss / args.iter_size
loss.backward()
```

This is the loss_adv code in your project. Thank you very much!

tarun005 commented 5 years ago

loss_adv is the loss of a target sample being classified as source, so the segmentation network should *minimize* this loss in order to confuse the discriminator (the discriminator is separately trained to correctly classify target and source samples).
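To make the direction of optimization concrete, here is a minimal, self-contained sketch of the idea (the tensor shapes and the toy discriminator are placeholders, not the repo's exact modules; I am assuming the repo's convention `source_label = 0`). Labeling the *target* prediction as "source" and minimizing the BCE is the standard non-saturating GAN trick: gradient descent on this loss already pushes the segmentation network toward fooling the discriminator, so no explicit maximization is needed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

source_label = 0  # assumed convention: 0 = source, 1 = target

# Hypothetical stand-ins: segmentation logits and a toy discriminator.
pred_target = torch.randn(1, 19, 65, 65, requires_grad=True)  # seg-net output (logits)
model_D = nn.Conv2d(19, 1, kernel_size=4, stride=2, padding=1)  # toy fully-conv D

# Discriminator scores the softmax output of the segmentation net.
D_out = model_D(F.softmax(pred_target, dim=1))

# Label the *target* output as "source" and minimize the BCE:
# descending this loss makes D score target predictions as source-like,
# which is exactly what "fooling the discriminator" means.
label = torch.full_like(D_out, source_label)
loss_adv = F.binary_cross_entropy_with_logits(D_out, label)
loss_adv.backward()

# Gradients flow back into the segmentation output, not into D's "correctness".
assert pred_target.grad is not None
```

When training the discriminator itself, the labels are flipped back to the truth (source batches get `source_label`, target batches get `target_label`), so the two networks pull in opposite directions even though both calls are `loss.backward()` followed by a minimizing optimizer step.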