CenterNet predicts (i) detection confidence scores (i.e. heatmaps) corresponding to the two corners and the center, (ii) embeddings to "group" the predicted individual points into boxes, and (iii) offsets for better localisation of the boxes. When using aLRP Loss, we collected a (confidence score, box) pair for each anchor/pixel from the detectors that we used in the paper. For top-down object detectors, whether anchor-based (e.g. RetinaNet) or anchor-free (e.g. FoveaBox, which is pixel-based), these pairs do not require any "grouping" operation. In other words, aLRP Loss expects a single (confidence score, box) pair per detection during training. Consequently, to train bottom-up object detectors (e.g. CornerNet, CenterNet) with aLRP Loss, at least the following design challenges need to be resolved:
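To make the requirement concrete, here is a minimal sketch (not our actual implementation; tensor names, shapes and the label convention are assumptions) of why a top-down detector already provides exactly what aLRP Loss ranks: one (confidence score, box) pair per anchor/pixel, with a clean positive/negative split.

```python
import torch

def collect_pairs(cls_logits, box_regression, anchor_labels):
    """cls_logits: (A,) per-anchor confidence logits,
    box_regression: (A, 4) per-anchor box predictions,
    anchor_labels: (A,) with 1 = positive, 0 = negative (clean split)."""
    scores = cls_logits.sigmoid()
    pos = anchor_labels == 1
    neg = anchor_labels == 0
    # Each positive anchor contributes exactly one (score, box) pair to the
    # ranking; negatives only contribute their scores.
    pos_pairs = (scores[pos], box_regression[pos])
    neg_scores = scores[neg]
    return pos_pairs, neg_scores
```

In a bottom-up detector, no such per-location pair exists before grouping, which is the root of the challenges listed below.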
(1) In CenterNet, the points are grouped into boxes only during inference, by using embeddings that are learned simultaneously with additional push and pull losses. So the boxes are never actually formed in the current training pipeline of CenterNet. To use aLRP Loss, the points would have to be grouped into boxes already during training, which can be challenging (see the grouping sketch after this list).
(2) Even after grouping, there are still three confidence scores for a box, originating from the two corner predictions and the center prediction; these are averaged at inference. Similar to grouping, this would also have to be handled during training. I think this part is easier to handle, since averaging is differentiable (also illustrated in the grouping sketch below).
(3) Besides, the ground-truth heatmap is augmented with unnormalized Gaussians (see CornerNet). As a result, positives and negatives are not split strictly, whereas the current aLRP Loss design relies on a clear split of positives and negatives. This should also be taken into consideration (see the heatmap-split sketch below).
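For challenges (1) and (2), a very rough, purely illustrative sketch of what "grouping during training" could look like is below: pair top-left/bottom-right corners whose embeddings are close (as CenterNet does at inference) and average the three heatmap scores. The function name, the embedding threshold, and the per-pair center-score tensor are all assumptions, not the paper's method; note also that the hard thresholding step is not differentiable with respect to the embeddings.

```python
import torch

def group_and_score(tl_scores, br_scores, ct_scores_per_pair,
                    tl_embed, br_embed, embed_thresh=0.5):
    """tl_scores, br_scores: (K,) scores of the top-K corner candidates,
    tl_embed, br_embed: (K,) 1-D embeddings of those corners,
    ct_scores_per_pair: (K, K) score of the best center keypoint falling in the
    central region of the box formed by top-left i and bottom-right j (assumed
    to be gathered beforehand)."""
    # Pairwise embedding distances between corner candidates (K x K).
    dists = (tl_embed[:, None] - br_embed[None, :]).abs()
    valid = dists < embed_thresh          # hard, non-differentiable grouping
    tl_idx, br_idx = valid.nonzero(as_tuple=True)
    # Challenge (2): the box score is the average of three scores, which is
    # differentiable, so gradients can flow to all three heatmaps once the
    # grouping itself is fixed.
    box_scores = (tl_scores[tl_idx]
                  + br_scores[br_idx]
                  + ct_scores_per_pair[tl_idx, br_idx]) / 3.0
    return tl_idx, br_idx, box_scores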
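For challenge (3), one possible heatmap-split sketch is to treat only the exact Gaussian peaks as positives and everything else as negatives, optionally ignoring locations with an intermediate Gaussian weight (which the focal loss down-weights rather than ignores). The threshold value is an assumption; neither CornerNet nor aLRP Loss prescribes it.

```python
import torch

def split_from_gaussian_heatmap(gt_heatmap, ignore_thresh=0.3):
    """gt_heatmap: (H, W) with values in [0, 1]; CornerNet-style ground truth
    places exactly 1.0 at annotated keypoints, so equality is safe here."""
    positives = gt_heatmap.eq(1.0)
    ignored = (~positives) & (gt_heatmap > ignore_thresh)
    negatives = ~(positives | ignored)
    return positives, negatives, ignored
```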
To sum up: I think this is an interesting problem, and it may also offer a significant simplification for training bottom-up detectors by unifying embedding learning with offset and corner/center prediction. However, as far as I can see, using aLRP Loss with bottom-up object detectors is not trivial.