daijifeng001 / R-FCN

R-FCN: Object Detection via Region-based Fully Convolutional Networks

It seems we do not benefit from the OHEM module? #32

Closed · xiaoxiongli closed this issue 8 years ago

xiaoxiongli commented 8 years ago

Dear jifei: When I downloaded your training experiment.zip from BaiduYun, I found that it seems we do NOT benefit from the OHEM module. In fact, it hurts the accuracy. For example, in your experiment log:

Without OHEM (rfcn_VOC0712_ResNet101_rpn_resnet101):
------------------------- Iteration 105000 -------------------------
Training : accuracy 0.922, loss (cls 0.195, reg 0.0867)
Testing : accuracy 0.892, loss (cls 0.284, reg 0.126)

With OHEM (rfcn_VOC0712_ResNet101_OHEM_rpn_resnet101):
------------------------- Iteration 105000 -------------------------
Training : accuracy 0.898, loss (cls 0.25, reg 0.118)
Testing : accuracy 0.858, loss (cls 0.366, reg 0.162)

So I am confused by this result. Could you please explain why OHEM does not seem to work here?

HaozhiQi commented 8 years ago

This is because the displayed train/test accuracy and loss are calculated from the hard examples, which should be more difficult to classify. That's exactly the purpose of OHEM.
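(For readers wondering what "calculated from the hard examples" means in practice, here is a minimal, illustrative sketch of online hard example mining in NumPy. It is not the repository's actual MATLAB/Caffe code; the function name `select_hard_examples`, the ROI count, and the `num_hard` value are assumptions.)

```python
import numpy as np

def select_hard_examples(roi_losses, num_hard=128):
    """Keep the ROIs with the highest per-ROI loss (cls + bbox reg)."""
    order = np.argsort(-roi_losses)      # sort losses, largest first
    return order[:num_hard]              # indices of the hardest ROIs

# The displayed statistics are averaged only over these mined ROIs,
# so the reported loss is higher (and accuracy lower) than a plain
# average over all sampled ROIs would be.
roi_losses = np.random.rand(300)         # e.g. 300 candidate ROIs in a batch
hard = select_hard_examples(roi_losses)
print(roi_losses[hard].mean(), ">=", roi_losses.mean())
```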

xiaoxiongli commented 8 years ago

Dear Oh233: Yep, thank you! I think you are right. I also suspect there is another reason: without OHEM, bg_thresh_lo is set to 0.1, which means we only use negative samples near positive samples, whereas with OHEM the network searches for hard examples over the whole image. So if an image contains two cars and we only annotate one of them, the other becomes a "hard example"... In that case, we need to fix the wrong "hard example" by annotating the other car or simply erasing it.
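(To make the sampling difference concrete, here is a rough, illustrative sketch; the function name, variable names, and thresholds are assumptions based on the usual Fast R-CNN / OHEM settings rather than this repository's actual code.)

```python
import numpy as np

def sample_background_rois(max_overlaps, use_ohem=False,
                           bg_thresh_hi=0.5, bg_thresh_lo=0.1):
    """Return indices of ROIs considered as background candidates.

    max_overlaps: per-ROI maximum IoU with any ground-truth box.
    Without OHEM, only ROIs with IoU in [bg_thresh_lo, bg_thresh_hi)
    are kept, i.e. negatives already lying near an object; with OHEM
    the lower bound drops to 0, so low-overlap ROIs anywhere in the
    image can be picked whenever their loss turns out to be high.
    """
    lo = 0.0 if use_ohem else bg_thresh_lo
    return np.where((max_overlaps >= lo) & (max_overlaps < bg_thresh_hi))[0]

overlaps = np.array([0.0, 0.05, 0.2, 0.45, 0.7])
print(sample_background_rois(overlaps))                 # [2 3]
print(sample_background_rois(overlaps, use_ohem=True))  # [0 1 2 3]
```

Under the wider OHEM sampling, an unannotated object (the second car in the example above) has low IoU with every ground-truth box and is therefore labelled background; because the network scores it highly as an object, its loss stays large and OHEM keeps selecting it as a "hard" negative, which is why fixing or erasing such labels matters.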

By the way, why does your comment show a "Collaborator" badge in the top-right corner? ^_^