Closed: pourfard closed this issue 5 years ago
@pourfard the order of bg/fg/unsure doesn't matter; the convolution's parameters will automatically adapt to match the channels split from trimap_softmax.
I think it does matter when pre-training the T-Net, if the trimap was encoded as background = 0, unsure = 128, foreground = 255.
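A minimal sketch of why the label encoding fixes the channel order (assuming the trimap values are converted to class indices with an integer division like `trimap // 127`, which is a common pattern; the exact conversion in this repo may differ):

```python
import numpy as np

# Hypothetical trimap patch containing the three label values.
trimap = np.array([[0, 128, 255]], dtype=np.uint8)

# Map 0 -> 0 (bg), 128 -> 1 (unsure), 255 -> 2 (fg).
classes = trimap.astype(np.int64) // 127
print(classes)  # [[0 1 2]]
```

If the T-Net is pre-trained with cross-entropy against these indices, channel 1 of its softmax output learns "unsure" and channel 2 learns "foreground", so the split must unpack in that same order.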
Hi, thanks for sharing the code. I'm confused by these lines. In dataset.py we have:
but when we split the T-Net output we have:
and in the loss function it seems we are comparing the wrong classes:
I think this line:

```python
bg, fg, unsure = torch.split(trimap_softmax, 1, dim=1)
```

should be:

```python
bg, unsure, fg = torch.split(trimap_softmax, 1, dim=1)
```
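`torch.split(x, 1, dim=1)` returns the channels strictly in index order, so the variable names on the left must match whatever class order the dataset used. A NumPy sketch of the same slicing (hypothetical shapes and values, standing in for the PyTorch call):

```python
import numpy as np

# Fake T-Net output after softmax: batch=1, 3 channels, 2x2 spatial.
trimap_softmax = np.zeros((1, 3, 2, 2), dtype=np.float32)
trimap_softmax[0, 0] = 0.7  # channel 0
trimap_softmax[0, 1] = 0.2  # channel 1
trimap_softmax[0, 2] = 0.1  # channel 2

# Equivalent of torch.split(trimap_softmax, 1, dim=1):
c0, c1, c2 = np.split(trimap_softmax, 3, axis=1)

# Channels come back in index order 0, 1, 2. If the dataset encodes
# bg=0, unsure=1, fg=2, the correct unpacking is: bg, unsure, fg.
print(float(c0[0, 0, 0, 0]), float(c1[0, 0, 0, 0]), float(c2[0, 0, 0, 0]))
```

Nothing in the split itself knows the semantics of each channel; the meaning comes entirely from the label encoding used during pre-training.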
I'm asking because my results are not as good as those reported in the paper, so I suspect something is wrong.