cccorn / AP-loss

The implementation of "Towards accurate one-stage object detection with AP-loss".
MIT License

Regression Error Formulation #5

Closed Ikhwansong closed 4 years ago

Ikhwansong commented 4 years ago

[screenshot: the regression (localization) loss code, around lines 166–176]

Hi, long time no see, sir!

It's nice to be in touch with you again.

Thank you for your reply last time. Following the code shown above, I have a new question about the regression (localization) formulation.

Would you please refer to lines 166, 168, 174, and 176?

Why did you bound regression_diff_abs using torch.le()? And when regression_diff_abs does not satisfy that condition, why do you subtract 0.5/1.0 or apply torch.sign()?

Thank you.

cccorn commented 4 years ago

We use the Smooth L1 loss for the regression task, so the loss is quadratic when regression_diff_abs is less than 1 and linear otherwise. Subtracting 0.5 keeps the loss function continuous at the transition point. torch.sign is the derivative of the absolute-value function with respect to the box prediction.
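A minimal sketch of this piecewise definition and its gradient, assuming a regression_diff tensor of prediction minus target; the function name and variable names here are illustrative and follow the discussion rather than the repository's exact code:

```python
import torch

def smooth_l1_loss_and_grad(regression_diff):
    # Absolute difference between box predictions and regression targets.
    regression_diff_abs = torch.abs(regression_diff)

    # Quadratic branch for |diff| <= 1, linear branch otherwise.
    # Subtracting 0.5 in the linear branch makes the two pieces meet
    # continuously at |diff| == 1, since 0.5 * 1**2 == 1 - 0.5.
    loss = torch.where(
        torch.le(regression_diff_abs, 1.0),
        0.5 * torch.pow(regression_diff_abs, 2),
        regression_diff_abs - 0.5,
    )

    # Gradient w.r.t. the box prediction:
    #   d/dx (0.5 * x**2) = x        for |x| <= 1
    #   d/dx (|x| - 0.5)  = sign(x)  for |x| > 1
    grad = torch.where(
        torch.le(regression_diff_abs, 1.0),
        regression_diff,
        torch.sign(regression_diff),
    )
    return loss, grad
```

The torch.le() test selects the quadratic branch, the 0.5 subtraction keeps the two branches continuous, and torch.sign() appears because it is the derivative of the absolute value in the linear branch.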

Ikhwansong commented 4 years ago

How kind you are! I understand now. I will try to apply this work in my paper, which will be submitted to ACCV 2020. Thanks for your impressive contribution to computer vision.