eyildiz-ugoe opened 5 years ago
I have the same question. I created my own dataset in COCO style and now I want to evaluate the results. I tried the code from coco.py, but it fails at some points. Does somebody have a solution?
@eyildiz-ugoe were you able to find a function to do COCO-style evaluation?
Not really, no. Unfortunately everybody seems to keep such things to themselves.
Not just mAP, but all the metrics that MS COCO defines: AP_small, AP_medium, AR_10, AR_100, etc.
See that PR for reference :D then write your own code
See #1024
Hello, if I understand this correctly, I can only see the mAP while training my network. Once I have trained my network, loaded it, and loaded my test set, how can I see the mAP for one specific image of my test set, and for all images of my test set?
I also tried to use the COCO metric, but it isn't working well. Here is my code: https://github.com/matterport/Mask_RCNN/issues/1466 Maybe someone can use it and fix it.
Well, I'm using this code to calculate the mAP score for my test set after training the network. Before that I had to annotate all images in my test set folder:
dataset_test = object.CustomDataset()
dataset_test.load_custom(custom_DIR, "test")
dataset_test.prepare()

APs = []
np.random.shuffle(dataset_test.image_ids)
for image_id in dataset_test.image_ids:
    # Load image and ground truth data
    image, image_meta, gt_class_id, gt_bbox, gt_mask = \
        modellib.load_image_gt(dataset_test, config, image_id,
                               use_mini_mask=False)
    # Run object detection (model.detect molds the image internally,
    # so the unused molded_images line from the original is dropped)
    results = model.detect([image], verbose=0)
    r = results[0]
    # Compute AP for this image
    AP, precisions, recalls, overlaps = \
        utils.compute_ap(gt_bbox, gt_class_id, gt_mask,
                         r["rois"], r["class_ids"], r["scores"], r["masks"])
    APs.append(AP)
print("mAP: ", np.mean(APs) * 100, "%")
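For anyone wondering what utils.compute_ap does conceptually: it matches predictions to ground truth by IoU and then integrates the precision-recall curve. Below is a minimal, self-contained sketch of that idea for boxes only (box_iou and simple_ap are illustrative names, not functions from this repo; the real compute_ap also checks class IDs and uses mask overlaps):

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes given as [y1, x1, y2, x2]."""
    y1, x1 = max(a[0], b[0]), max(a[1], b[1])
    y2, x2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, y2 - y1) * max(0, x2 - x1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def simple_ap(gt_boxes, pred_boxes, scores, iou_threshold=0.5):
    """Toy single-class AP: greedy matching in score order, then
    a simplified precision-recall integration. Assumes >= 1 GT box."""
    order = np.argsort(scores)[::-1]
    matched = set()
    tp = np.zeros(len(pred_boxes))
    for i, p in enumerate(order):
        for j, g in enumerate(gt_boxes):
            if j not in matched and box_iou(pred_boxes[p], g) >= iou_threshold:
                matched.add(j)
                tp[i] = 1
                break
    fp = 1 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / len(gt_boxes)
    precision = tp_cum / (tp_cum + fp_cum)
    # Sum precision weighted by recall increments (no envelope smoothing)
    ap, prev_r = 0.0, 0.0
    for r_, p_ in zip(recall, precision):
        ap += (r_ - prev_r) * p_
        prev_r = r_
    return ap
```

A perfect detection of a single ground-truth box gives AP = 1.0; a wrong high-scoring detection ahead of a correct one halves it.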
Thank you, I figured that out, too. This works for me.
I get an mAP of 97.8595. Something seems fishy; I wonder if this is correct.
@salvador-blanco try the COCO metric. I tried both ways, and the COCO metric gave me worse results than compute_ap() in most cases. (Note that compute_ap() evaluates at a single IoU threshold of 0.5 by default, while the COCO metric averages over IoU thresholds from 0.5 to 0.95, which likely explains why the COCO numbers come out lower.)
Did anyone evaluate AP_S, AP_M and AP_L already? So with our own dataset, should we switch to COCO style and use coco.py?
Why not use coco.py? My results were not good, but I could evaluate AP_S, AP_M and AP_L with my solution by using coco.py, as I suggested. I would recommend using coco.py, because then you can evaluate many things.
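For anyone who just wants the size buckets without pycocotools: COCO defines AP_S, AP_M and AP_L by object area in pixels, with small below 32*32, medium between 32*32 and 96*96, and large above 96*96. A tiny helper (my own, not from coco.py) to bin your objects accordingly:

```python
def coco_size_bucket(area):
    """Return COCO's size bucket for an object area (in pixels^2).

    Thresholds follow the COCO evaluation convention:
    small < 32*32, medium < 96*96, large otherwise.
    """
    if area < 32 * 32:
        return "small"
    if area < 96 * 96:
        return "medium"
    return "large"
```

You could then run compute_ap separately on each bucket of ground-truth objects to approximate AP_S/AP_M/AP_L.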
It seems hard to evaluate your own dataset when it is not in COCO label format. I switched it to COCO style but got stuck passing the coco parameter inside the evaluate_coco function. Did you use your own dataset and get that function to work correctly?
Regarding the mAP of 97.8595 above: I wonder what this mAP stands for, the mask or the bounding box?
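If I read utils.py correctly, compute_ap matches detections to ground truth via mask overlaps (compute_overlaps_masks), so the number above should be a mask mAP rather than a box mAP. Mask IoU itself is just intersection over union on the boolean masks; a quick sketch (mask_iou is an illustrative name, not a repo function):

```python
import numpy as np

def mask_iou(m1, m2):
    """IoU of two boolean masks of the same shape."""
    inter = np.logical_and(m1, m2).sum()
    union = np.logical_or(m1, m2).sum()
    return inter / union if union else 0.0

# Two overlapping 4x4 masks: top two rows vs. middle two rows.
# They share one row, so intersection = 4 px and union = 12 px.
a = np.zeros((4, 4), dtype=bool); a[:2, :] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True
```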
Sorry for my late reply. Yes, it was a bit tricky, but I was able to use my own dataset in COCO style and evaluate it. You can look through my code.
@thepate94227 I didn't find your code.
Is there any function that automatically outputs AP, AP50, AP75, AP_S, AP_M and AP_L? These are the default metrics used in the paper, and I would like to see the function that was used to produce them. My question is: how can we evaluate a model trained on our own dataset? Is it pycocotools? Or does someone have an easier way of doing this?
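As far as I know there is no single helper in this repo that prints the full AP/AP50/AP75/AP_S/AP_M/AP_L table; that table comes from pycocotools (COCOeval.summarize()), which coco.py wraps. If you stay with utils.compute_ap, you can at least get the headline COCO-style AP by averaging over the ten IoU thresholds 0.50:0.05:0.95. Recent versions of utils.py ship compute_ap_range for this; the helper below is an illustrative stand-alone version, where ap_fn would wrap a call to compute_ap with iou_threshold set:

```python
import numpy as np

def ap_over_iou_range(ap_fn, thresholds=np.arange(0.5, 1.0, 0.05)):
    """Average AP over COCO's ten IoU thresholds 0.50:0.05:0.95.

    `ap_fn` is any callable mapping an IoU threshold to an AP value,
    e.g. lambda t: utils.compute_ap(..., iou_threshold=t)[0].
    """
    return float(np.mean([ap_fn(t) for t in thresholds]))
```

For AP50 and AP75, just call compute_ap directly with iou_threshold=0.5 or 0.75.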