adamboazbecker opened 6 months ago
One way to build intuition is to walk through concrete, instructive examples of ground-truth/model-output pairs and measure the metric on each of them. Afterwards, we can explain how the metric varies as we vary some parameter of the ground-truth/output pair.
For example, to explain IoU for object detection, we might first give some concrete examples of bounding-box predictions on a picture together with their corresponding metric values. Then, building on that intuition, we can say: if the model predicts the boxes perfectly, the IoU is 1. As the overlap between the ground truth and the prediction shrinks, the intersection shrinks, so the metric decreases. As the union grows (e.g., the model predicts more or bigger boxes than the ground truth), the metric also decreases.
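To make those cases concrete, here is a minimal sketch (assuming axis-aligned boxes in `(x1, y1, x2, y2)` format; the `iou` helper and the example boxes are just illustrative):

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes don't overlap)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

gt = (0, 0, 10, 10)                        # ground-truth box
print(iou(gt, (0, 0, 10, 10)))    # 1.0   -> perfect prediction
print(iou(gt, (5, 0, 15, 10)))    # ~0.33 -> less overlap, smaller intersection
print(iou(gt, (0, 0, 20, 20)))    # 0.25  -> bigger predicted box inflates the union
print(iou(gt, (20, 20, 30, 30)))  # 0.0   -> no overlap at all
```

Sliding one of these example boxes a little at a time (e.g., shifting the prediction by one pixel per step and plotting the IoU) is a simple way to show how the metric responds to each parameter.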
What's a good way to communicate the intuition behind blended, aggregating metrics?