Closed: jongmokim7 closed this issue 5 years ago
Sorry for the late response. I somehow missed this one.
1) We followed a PSPNet-style approach. However, you could use the one you are proposing; I don't think it will make a huge difference.
2) Give it a try.
@jongmokim7 This question is very interesting! But to some extent, there is no definitive explanation for it. In PSPNet, an auxiliary loss weighted by 0.4 is added during training, and in BiSeNet two more auxiliary losses (each with weight 1) are added to optimize the whole model. So you will see many excellent models use this training strategy, but how should we understand it?
It should have some benefits, at least.
I'm also a learner of DL, focusing on semantic segmentation. I agree with @sacmehta: you should give your ideas a try.
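For concreteness, the weighting schemes mentioned above can be sketched in PyTorch. The tensor shapes and the helper below are illustrative, not the actual EESPNet_Seg code:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

def aux_weighted_loss(main_out, aux_outs, target, aux_weight=0.4):
    """Main loss plus weighted auxiliary losses.

    PSPNet uses a single auxiliary head with weight 0.4; BiSeNet adds two
    auxiliary heads, each with weight 1.0.
    """
    loss = criterion(main_out, target)
    for aux in aux_outs:
        loss = loss + aux_weight * criterion(aux, target)
    return loss

# Toy logits: batch 2, 3 classes, 8x8 spatial resolution.
main_out = torch.randn(2, 3, 8, 8)
aux_out = torch.randn(2, 3, 8, 8)
target = torch.randint(0, 3, (2, 8, 8))

psp_loss = aux_weighted_loss(main_out, [aux_out], target, aux_weight=0.4)
```

At inference time the auxiliary heads are simply discarded; they only exist to inject extra gradient signal into the intermediate layers during training.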
Hi, thank you for your work and for sharing it.
I found there are two outputs from EESPNet_Seg:
- output1 from level 4 (used in inference)
- output2 from level 2 (only used in the training stage)
I'm wondering why you used the sum of the two losses:
loss1 = criterion(output1, target)
loss2 = criterion(output2, target)
loss = loss1 + loss2
1) What if you tried "loss = criterion(output1 + output2, target)" and used "output1 + output2" as the final segmentation output, similar to the skip connections used in FCN-8s?
2) What if you used one more (another) output from level 3?
If you have already tried these combinations, could you share the details and explain why you decided against them? If not, what do you think of the idea, and what could we expect from it?
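The two variants being compared can be sketched side by side with toy tensors (the real EESPNet_Seg heads and shapes may differ):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# Toy logits standing in for the two EESPNet_Seg heads (shapes are illustrative).
output1 = torch.randn(2, 3, 8, 8)  # level-4 head (used at inference)
output2 = torch.randn(2, 3, 8, 8)  # level-2 head (training only)
target = torch.randint(0, 3, (2, 8, 8))

# (a) The repo's approach: sum of two per-head losses.
loss_sum_of_losses = criterion(output1, target) + criterion(output2, target)

# (b) The proposed FCN-8s-style variant: fuse the logits first, then compute
# a single loss; `output1 + output2` would also be the inference prediction.
fused = output1 + output2
loss_of_sum = criterion(fused, target)

# 2) A third head from level 3 would just append another
# `criterion(output3, target)` term in variant (a), or another summand
# in the fused logits in variant (b).
```

Note the difference in supervision: (a) constrains each head independently, giving the shallow layers a direct gradient signal, while (b) only constrains the fused prediction, so one head can compensate for errors in the other.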