Open · ghost opened this issue 6 years ago

Is it possible to calculate a confusion matrix using COCO evaluate? I would like to know, for each detection in the image, what the GT label is.
I'm not sure what "coco evaluate" is. Can you show us the code you are trying to run? I used inspect_model.ipynb from the nucleus example to look at the detection results for each test image; you can adapt it to your dataset.
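For the "what GT label does each detection have" part, here is a rough sketch using `utils.compute_matches` from the matterport repo (assuming `model`, `dataset`, `config`, and an `image_id` are set up as in inspect_model.ipynb; this is one way to do it, not code from the notebook):

```python
import mrcnn.model as modellib
from mrcnn import utils

# Load one image with its ground truth, then run detection on it.
image, image_meta, gt_class_ids, gt_boxes, gt_masks = modellib.load_image_gt(
    dataset, config, image_id, use_mini_mask=False)
r = model.detect([image], verbose=0)[0]

# compute_matches pairs predictions with GT boxes by IoU; pred_match[i]
# is the index of the GT box matched to prediction i, or -1 if none.
# (It assumes predictions are sorted by score, which model.detect returns.)
gt_match, pred_match, overlaps = utils.compute_matches(
    gt_boxes, gt_class_ids, gt_masks,
    r['rois'], r['class_ids'], r['scores'], r['masks'],
    iou_threshold=0.5)

for i, j in enumerate(pred_match.astype(int)):
    pred_name = dataset.class_names[r['class_ids'][i]]
    gt_name = dataset.class_names[gt_class_ids[j]] if j != -1 else "(no match)"
    print("detection %d: predicted %s, GT %s" % (i, pred_name, gt_name))
```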
@adarvit I'm curious if you ever found a good way to make a confusion matrix here?
If you only care about counting correct predictions for each ground-truth label, this is one option:
```python
from collections import defaultdict
import itertools

def count_prediction(gt_class_ids, pred_class_ids, overlaps,
                     class_names, threshold=0.5):
    """Count how many ground-truth objects are correctly classified.

    gt_class_ids: [N] int. Ground-truth class IDs.
    pred_class_ids: [M] int. Predicted class IDs.
    overlaps: [M, N] IoU overlaps between predicted and GT boxes.
    class_names: list of all class names in the dataset.
    threshold: Float. Minimum IoU for a prediction to count as a match.
    """
    inspect_class = defaultdict(int)
    # Drop background entries (class ID 0), if any.
    gt_class_ids = gt_class_ids[gt_class_ids != 0]
    pred_class_ids = pred_class_ids[pred_class_ids != 0]
    for i, j in itertools.product(range(overlaps.shape[0]),
                                  range(overlaps.shape[1])):
        if overlaps[i, j] > threshold and gt_class_ids[j] == pred_class_ids[i]:
            inspect_class[class_names[gt_class_ids[j]]] += 1
    return inspect_class
```
The output shows how many objects of each class the model detected correctly, something like this:

```
defaultdict(int, {'people': 5, 'car': 3})
```
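For what it's worth, a hypothetical call using matterport Mask_RCNN outputs might look like this (`utils.compute_overlaps` builds the [pred, GT] IoU matrix the function expects; `model`, `dataset`, `config`, and `image_id` are assumed to be set up as usual):

```python
import mrcnn.model as modellib
from mrcnn import utils

image, _, gt_class_ids, gt_boxes, _ = modellib.load_image_gt(
    dataset, config, image_id, use_mini_mask=False)
r = model.detect([image], verbose=0)[0]

# IoU matrix of shape [n_predictions, n_gt_boxes].
overlaps = utils.compute_overlaps(r['rois'], gt_boxes)
counts = count_prediction(gt_class_ids, r['class_ids'], overlaps,
                          dataset.class_names, threshold=0.5)
print(counts)
```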
Here is a repo that could help: https://github.com/Altimis/Confusion-matrix-for-Mask-R-CNN
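If you only want the idea rather than the full repo, here is a rough per-image sketch of the same approach (my own simplification, not that repo's code): match each prediction to its best-overlapping GT box and accumulate counts, using row/column 0 (background) for false positives and missed objects.

```python
import numpy as np
from mrcnn import utils

def image_confusion_matrix(gt_class_ids, gt_boxes, pred_class_ids, pred_boxes,
                           num_classes, iou_threshold=0.5):
    """Confusion matrix for a single image.

    Rows are GT classes, columns are predicted classes; index 0 is
    background. num_classes includes background (dataset.num_classes).
    """
    cm = np.zeros((num_classes, num_classes), dtype=int)
    overlaps = utils.compute_overlaps(pred_boxes, gt_boxes)  # [n_pred, n_gt]
    matched_gt = set()
    for i, pred_id in enumerate(pred_class_ids):
        j = int(np.argmax(overlaps[i])) if overlaps.shape[1] > 0 else -1
        if j >= 0 and overlaps[i, j] >= iou_threshold:
            cm[gt_class_ids[j], pred_id] += 1  # matched: GT row, pred column
            matched_gt.add(j)
        else:
            cm[0, pred_id] += 1                # false positive
    for j, gt_id in enumerate(gt_class_ids):
        if j not in matched_gt:
            cm[gt_id, 0] += 1                  # false negative (missed GT)
    return cm
```

Summing these per-image matrices over the whole test set gives the full confusion matrix. Note this greedy matching can assign several predictions to one GT box; a more careful implementation (like the repo above) deduplicates matches.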