Open

andreaceruti opened this issue 2 years ago
@andreaceruti were you able to find a solution to this? I am struggling to compute the F1 score as well. Please let me know if you have found something.
@bhuvanofc you can see my personal solution in issue #572. I have also found a very useful project, and I see that you commented on an issue of mine on that repository as well. My solution and the method they use in review_object_metrics are very similar (only one TP of difference, which could be due to a corner case!); the only problem is that they only use bounding boxes to calculate the IoU. It would be great if they could also add polygons to their tool. If you are looking for a fast solution that uses the COCO API, just use my method written in #572.
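For what it's worth, polygon IoU can be computed directly with pycocotools' mask utilities instead of falling back to boxes. A minimal sketch, assuming `gt_polys` and `dt_polys` are COCO-style polygon lists for one object each; the image size below is a placeholder:

```python
from pycocotools import mask as maskUtils

# placeholders: COCO-style polygons ([[x1, y1, x2, y2, ...], ...]) and image size
height, width = 480, 640

gt_rles = maskUtils.frPyObjects(gt_polys, height, width)  # polygons -> RLE masks
dt_rles = maskUtils.frPyObjects(dt_polys, height, width)
gt_rle = maskUtils.merge(gt_rles)  # merge multi-part polygons into a single mask
dt_rle = maskUtils.merge(dt_rles)

iou = maskUtils.iou([dt_rle], [gt_rle], [0])  # iscrowd=0; returns a 1x1 matrix
print(iou[0][0])
```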
@andreaceruti Thanks for the reply. The method you used is only for one image; I need it for many images. I tried the method mentioned in https://github.com/cocodataset/cocoapi/issues/572, but I am not completely sure it is the right one. Do you have any idea whether the method mentioned in the above link is correct?
@bhuvanofc This is a snippet for more images; in any case, it is better if you run some tests using the COCO library with your ground-truth and detection files. This is just my personal solution to the problem, since I am also using the COCO library and, like you, I have to do some evaluation for my master's thesis project and am struggling too. Another option could be to use the Pascal VOC evaluator, or, if you use the Mask R-CNN API, there are some very useful functions there, but we would have to investigate these tools a bit more, and time passes ;)
```python
from pycocotools.cocoeval import COCOeval

# coco_gt and coco_dt are the COCO ground-truth and detection objects
custom_areaRng = [[0, 10000000000.0]]  # a single area range covering everything
coco_eval = COCOeval(coco_gt, coco_dt, "segm")
coco_eval.params.areaRng = custom_areaRng
coco_eval.evaluate()

treshold_index = 0  # index into params.iouThrs: 0 corresponds to IoU 0.5
tp = fp = n_gt = n_ign = 0

for ix, img in enumerate(coco_eval.evalImgs):
    image_evaluation = coco_eval.evalImgs[ix]
    print("image id: {}".format(image_evaluation["image_id"]))
    ign = image_evaluation["dtIgnore"][treshold_index]  # ignore flags for the detections at this IoU threshold
    mask = ~ign  # keep the detections that are not ignored, so basically all the detections done
    n_ignored = ign.sum()  # number of ignored detections
    n_ign += n_ignored
    tp += (image_evaluation["dtMatches"][treshold_index][mask] > 0).sum()
    fp += (image_evaluation["dtMatches"][treshold_index][mask] == 0).sum()
    n_gt += len(image_evaluation["gtIds"]) - image_evaluation["gtIgnore"].astype(int).sum()

recall = tp / n_gt
precision = tp / (tp + fp)
f1 = 2 * precision * recall / (precision + recall)
print("precision: {}, recall: {}, f1 score: {}".format(precision, recall, f1))
```
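The snippet assumes that `coco_gt` and `coco_dt` already exist. A minimal sketch of building them from files, with placeholder file names:

```python
from pycocotools.coco import COCO

# placeholder paths: ground truth in COCO format, detections in the COCO results format
coco_gt = COCO("annotations/instances_test.json")
coco_dt = coco_gt.loadRes("segm_detections.json")
```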
@andreaceruti thank you very much for the code. I tried the same, but I keep getting this error. Did you also get it? I think the problem is the line image_evaluation = coco_eval.evalImgs[ix]; it seems evalImgs does not accept indexing there.
print("image id: {} ".format(image_evaluation["image_id"])) TypeError: 'NoneType' object is not subscriptable
@bhuvanofc No, for me it works: evalImgs should be a list of per-image results of length (number of categories * number of area ranges * number of images). In my case I chose one areaRng and I have only one category, so evalImgs reduces to a list with length equal to my dataset size. Do you have more categories?
@andreaceruti you mean the category_id, right? I have category_ids 0 and 1.
If you have 2 classes in your dataset, you have to change the script to adapt it to your case; that was just a snippet that worked in my case. Try with this for loop: for ix in range(len(coco_eval.evalImgs)). Anyway, print len(coco_eval.evalImgs) somewhere and check whether it is equal to the length of your test dataset.
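One way to adapt it: in pycocotools, evalImgs is ordered by category first, then area range, then image, and an entry is None when an image has neither ground truth nor detections for that category. A minimal sketch that slices out a single category; the category and area indices below are placeholders:

```python
n_imgs = len(coco_eval.params.imgIds)
n_area = len(coco_eval.params.areaRng)

cat_index = 1   # placeholder: position of the category you want in params.catIds
area_index = 0  # position of the area range you want in params.areaRng

start = (cat_index * n_area + area_index) * n_imgs
for image_evaluation in coco_eval.evalImgs[start:start + n_imgs]:
    if image_evaluation is None:  # no ground truth and no detections for this image/category
        continue
    # ...accumulate tp / fp / n_gt exactly as in the snippet above...
```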
@andreaceruti Thanks for the tip. Yes, you are right, I have two classes, so the first 766 values (the length of my dataset) are None and the next 766 have values belonging to the class I want. I have just one more clarification: is treshold_index = 1 for IoU 0.55, treshold_index = 2 for 0.6, and so on?
@bhuvanofc yes, exactly. You can also change the threshold parameters if you want an IoU threshold of 0.3/0.4/etc.
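For reference, the default coco_eval.params.iouThrs is np.linspace(0.5, 0.95, 10), so index 0 is IoU 0.50, index 1 is 0.55, and so on. A minimal sketch of overriding the thresholds (the values are just an example, and this has to be done before calling evaluate()):

```python
import numpy as np

print(coco_eval.params.iouThrs)  # default: 0.50, 0.55, ..., 0.95

# example: evaluate at looser IoU thresholds instead
coco_eval.params.iouThrs = np.array([0.3, 0.4, 0.5])
treshold_index = 2  # would now correspond to IoU 0.5
coco_eval.evaluate()
```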
How can I calculate the F1-score for an instance segmentation task on my custom COCO dataset? I want to calculate it at every IoU threshold, at maxDets, and at the maximum area range. So, diving into the accumulate() function of COCOeval, where should I take the precision and recall that are already calculated?
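One way to read them directly out of accumulate()'s output, rather than counting matches by hand: after evaluate() and accumulate(), coco_eval.eval['precision'] has shape [T, R, K, A, M] (IoU thresholds, recall levels, categories, area ranges, maxDets) and holds the interpolated precision at each recall level in params.recThrs; entries of -1 mean no data. A minimal sketch that computes a best-F1 point along that curve, with placeholder indices:

```python
import numpy as np

t = 0    # IoU-threshold index (0 -> 0.50 with the default thresholds)
k = 0    # category index (placeholder: first category)
a = 0    # area-range index (0 -> 'all' with the default area ranges)
m = -1   # maxDets index (-1 -> the largest value, 100 by default)

precision_curve = coco_eval.eval['precision'][t, :, k, a, m]  # precision at each recall level
recall_levels = coco_eval.params.recThrs                      # matching recall axis (0.00 ... 1.00)
valid = precision_curve > -1                                  # drop missing entries

f1_curve = 2 * precision_curve[valid] * recall_levels[valid] / (
    precision_curve[valid] + recall_levels[valid] + np.spacing(1))
print("best F1 at IoU=0.50:", f1_curve.max())
```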