When trying with predictions and targets of shape (batch_size, num_classes, image_height, image_width), I get the following size for every returned metric tensor (tp, fp, fn, tn, iou_score, recall, precision): size([batch_size, num_classes]).
AFAIK, this means that for iou_score, for example, each row holds the per-class scores of one prediction, and if I average the tensor along its rows (i.e. over dim 0) I should get the mean value per class. Please correct me if this is wrong. Because if that is the case, I am getting a score of 1.0 for all classes except the first two, despite there being clear differences between the target and the prediction.
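Roughly, the calls look like this (a simplified sketch with dummy tensors; segmentation_models_pytorch's smp.metrics is assumed here, and predictions / targets are placeholder names):

import torch
import segmentation_models_pytorch as smp

batch_size, num_classes, H, W = 8, 9, 256, 256

# Dummy one-hot style masks with the shape described above; in the real code
# predictions and targets come from the model and the dataloader.
predictions = torch.randint(0, 2, (batch_size, num_classes, H, W), dtype=torch.long)
targets = torch.randint(0, 2, (batch_size, num_classes, H, W), dtype=torch.long)

tp, fp, fn, tn = smp.metrics.get_stats(
    predictions, targets, mode="multiclass", num_classes=num_classes
)

# With reduction=None every metric keeps the shape [batch_size, num_classes].
iou_score = smp.metrics.iou_score(tp, fp, fn, tn, reduction=None)
recall = smp.metrics.recall(tp, fp, fn, tn, reduction=None)
precision = smp.metrics.precision(tp, fp, fn, tn, reduction=None)

print(iou_score.shape)        # torch.Size([8, 9]) -> [batch_size, num_classes]
print(iou_score.mean(dim=0))  # averaging over the rows -> one mean value per class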
iou_score for image 0:

iou_score[0]
tensor([0.9594, 0.5560, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000])

Plotting the target vs pred for class 4:

pred = predictions[0][4].to("cpu").numpy()
plt.imshow(pred)
Target for class ID 4:
[image]

Prediction for the same class ID 4:
[image]
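The two figures are produced roughly like this (sketch; targets is the assumed name of the ground-truth tensor, with the same (batch_size, num_classes, height, width) layout as the predictions tensor used above):

import matplotlib.pyplot as plt

# Masks for image 0, class ID 4 (same indexing as the snippet above);
# predictions and targets are the tensors described earlier in the post.
tgt = targets[0][4].to("cpu").numpy()
pred = predictions[0][4].to("cpu").numpy()

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].imshow(tgt)
axes[0].set_title("Target, class ID 4")
axes[1].imshow(pred)
axes[1].set_title("Prediction, class ID 4")
plt.show()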
I think that, because of this, I am getting overly optimistic metrics that do not reflect the actual quality of the predictions. When switching to "multilabel" instead of "multiclass", the results make more sense. Can someone explain this, please?
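Concretely, the switch I mean is only the mode argument (again a sketch with dummy tensors, assuming smp.metrics):

import torch
import segmentation_models_pytorch as smp

# Dummy one-hot style masks with the layout described above.
preds = torch.randint(0, 2, (8, 9, 64, 64), dtype=torch.long)
masks = torch.randint(0, 2, (8, 9, 64, 64), dtype=torch.long)

# Run 1: mode="multiclass" (requires num_classes).
tp, fp, fn, tn = smp.metrics.get_stats(preds, masks, mode="multiclass", num_classes=9)
print(smp.metrics.iou_score(tp, fp, fn, tn, reduction=None).mean(dim=0))

# Run 2: mode="multilabel" (the variant whose per-class scores look more plausible to me).
tp, fp, fn, tn = smp.metrics.get_stats(preds, masks, mode="multilabel")
print(smp.metrics.iou_score(tp, fp, fn, tn, reduction=None).mean(dim=0))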