QZLearning opened this issue 3 years ago
Hi, thanks for the great question, and sorry if the paper confused you. The two bounds are our analyses of the problem. At the end of Sec. 4, we concluded that the analysis leads to two concrete modifications: 1) using a strong proposal network (i.e., a one-stage detector that directly outputs a reliable object score, rather than an RPN, which is mainly designed to maximize recall); 2) multiplying the proposal score into the final detection score. So the loss functions are the original losses for both stages: for the first stage, we use the CenterNet heatmap loss and the GIoU loss; for the second stage, we use the default softmax classification loss and regression loss.
Best, Xingyi
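For anyone skimming this thread, here is a minimal sketch (hypothetical, not the repository's actual code; the function name and shapes are assumptions) of the second modification described above: at inference time, the final detection score is the first-stage proposal (objectness) score multiplied by the second-stage softmax class probability.

```python
import math

def combine_scores(proposal_score, class_logits):
    """Hypothetical sketch: final score = proposal score * class probability.

    proposal_score: objectness score from the one-stage proposal network, in [0, 1].
    class_logits: raw second-stage classification logits, one per class.
    Returns one combined detection score per class.
    """
    # Numerically stable softmax over the second-stage class logits.
    m = max(class_logits)
    exps = [math.exp(x - m) for x in class_logits]
    total = sum(exps)
    class_probs = [e / total for e in exps]
    # Multiply the proposal score into each class probability
    # (the "multiply the proposal score" modification from the paper).
    return [proposal_score * p for p in class_probs]
```

Because the class probabilities sum to 1, the combined scores sum to the proposal score, so a low-confidence proposal suppresses all of its class scores at once.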
Thanks for your great work! Are there any results on combining the scores of the two stages during the training phase?
Hi, thank you for your great work. I'm wondering where the loss function is defined. As you claimed in the paper, you jointly optimize two lower bounds for background samples. Could you point me to the part of the code that implements this?