eraserNut / MTMT

Code for the CVPR 2020 paper "A Multi-task Mean Teacher for Semi-supervised Shadow Detection"

Some errors in your shadow detection results on the test datasets #15

Open Jingwei-Liao opened 4 years ago

Jingwei-Liao commented 4 years ago

Hi, we think you may have uploaded the wrong results for SBU, because we find the SBU result is identical to the SBU_crf result.

Jingwei-Liao commented 4 years ago

Also, could you upload your pretrained shadow detection model? We need it to compare our method against yours on our own dataset.

eraserNut commented 4 years ago

Q1: We generated the outputs again and got similar results. The similarity between SBU and SBU_crf is likely caused by the binarization operation (prediction = (prediction > 90) * 255) that we apply before the CRF on the SBU dataset. At that time, we observed that many positive pixels still fell below 127.5, so we compensated with a biased binarization threshold. However, after the submission, we found that a weighted BCE loss can balance this problem instead. Q2: Thanks for your advice; we will upload our pretrained model soon.
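The biased binarization mentioned above can be sketched as follows. This is a minimal illustration, not the authors' actual code; it assumes the prediction is a uint8 grayscale map in [0, 255], and the function name is hypothetical:

```python
import numpy as np

def biased_binarize(prediction: np.ndarray, threshold: int = 90) -> np.ndarray:
    """Binarize a soft shadow map with a biased threshold.

    A plain 127.5 cutoff would discard many true-shadow pixels whose
    scores fall below it, so the threshold is lowered to 90 before
    the CRF refinement step (per the discussion above).
    """
    return ((prediction > threshold) * 255).astype(np.uint8)

# Soft predictions in [0, 255]: values above 90 map to 255, the rest to 0.
soft = np.array([[30, 95], [120, 200]], dtype=np.uint8)
hard = biased_binarize(soft)
```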

guanhuankang commented 4 years ago

> Q1: We generated the outputs again and got similar results. The similarity between SBU and SBU_crf is likely caused by the binarization operation (prediction = (prediction > 90) * 255) that we apply before the CRF on the SBU dataset. At that time, we observed that many positive pixels still fell below 127.5, so we compensated with a biased binarization threshold. However, after the submission, we found that a weighted BCE loss can balance this problem instead. Q2: Thanks for your advice; we will upload our pretrained model soon.

Hello, I wonder whether you select different thresholds for different datasets. For example, do you use (prediction > 90) on SBU and (prediction > x) on ISTD, with x != 90? Thanks!

eraserNut commented 4 years ago

Actually, we only apply this binarization on SBU. For UCF and ISTD, we save the soft output before the CRF.
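Putting the two answers together, the per-dataset post-processing could be sketched like this. This is a hedged reconstruction from the thread, not the repository's actual code, and the function name is hypothetical:

```python
import numpy as np

def postprocess_before_crf(prediction: np.ndarray, dataset: str) -> np.ndarray:
    """Per-dataset handling of the network output before CRF.

    Per the discussion above: SBU predictions are binarized with the
    biased threshold of 90, while UCF and ISTD keep the soft
    (continuous) output unchanged.
    """
    if dataset == "SBU":
        return ((prediction > 90) * 255).astype(np.uint8)
    return prediction  # soft output for UCF / ISTD
```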