libuyu / GHM_Detection

The implementation of "Gradient Harmonized Single-stage Detector", published at AAAI 2019.

consulting some questions #20

Open dpengwen opened 5 years ago

dpengwen commented 5 years ago

Hi, thanks for your great work and for sharing the code! I have some questions:

1) In line 47 of ghm_loss.py, you update the accumulated statistic with `self.acc_sum[i] = mmt * self.acc_sum[i] + (1 - mmt) * num_in_bin`. In https://github.com/libuyu/GHM_Detection/issues/14#issue-422548624 you explained that 'self.acc_sum would consider not only samples in the current batch, but also its previous value'. However, from the update in the code and equation (12) in the paper, I think that at iteration t, the self.acc_sum in the i-th bin only depends on the previous self.acc_sum. Am I missing something?

2) In https://github.com/libuyu/GHM_Detection/issues/4#issuecomment-458033785, you explained `sum[i+1] = mmt * sum[i] + (1 - mmt) * num[i]`. It seems that at each iteration t, the acc_sum in the (i+1)-th bin depends on the acc_sum in the i-th bin. Is that wrong?

3) When I use only the GHM-C loss for pixel-level classification in a segmentation task, the loss does not decrease. Could you give me some suggestions?

Thank you again, and sorry for bothering you. Looking forward to your reply.

libuyu commented 5 years ago

1 & 2: The exponential moving average is just a widely used technique to keep a variable more stable during updating (see the Wikipedia article). The SGD optimizer also adopts it through its momentum parameter. If mmt is 0.75 here, the new acc_sum comes from 75% of the last acc_sum and 25% of the newly calculated sample distribution (num_in_bin). You just missed num_in_bin.
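To make this concrete, here is a minimal sketch of that update in isolation (assuming, as in the snippet above, that acc_sum is a tensor of per-bin statistics; the real loss class wraps this in the full binning logic):

```python
import torch

def update_acc_sum(acc_sum, num_in_bin, i, mmt=0.75):
    # Exponential moving average: the new statistic blends the previous
    # value (weight mmt) with the current batch's count (weight 1 - mmt),
    # so it depends on *both* terms, not just the previous acc_sum.
    acc_sum[i] = mmt * acc_sum[i] + (1 - mmt) * num_in_bin

# Toy run for one bin over three iterations with mmt = 0.75:
acc_sum = torch.zeros(10)
for num_in_bin in [100.0, 80.0, 120.0]:
    update_acc_sum(acc_sum, num_in_bin, i=0)
    print(acc_sum[0].item())  # 25.0, 38.75, 59.0625
```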

3: For segmentation work, I think you can simply use two fixed loss weights for positive/negative samples to balance the loss.
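As a rough illustration (not code from this repo; the 0.25/0.75 weights below are placeholders you would tune), fixed pos/neg weighting of a pixel-wise binary cross-entropy could look like:

```python
import torch
import torch.nn.functional as F

def weighted_bce(logits, target, pos_weight=0.75, neg_weight=0.25):
    # Fixed per-pixel weights: pos_weight where target == 1, neg_weight elsewhere.
    weight = torch.where(target > 0,
                         torch.full_like(target, pos_weight),
                         torch.full_like(target, neg_weight))
    return F.binary_cross_entropy_with_logits(logits, target, weight=weight)

# Toy usage: (N, H, W) logits and a sparse binary mask.
logits = torch.randn(2, 32, 32)
target = (torch.rand(2, 32, 32) > 0.9).float()
loss = weighted_bce(logits, target)
```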

dpengwen commented 5 years ago

Thanks.