Closed: kid134679 closed this issue 3 years ago.
You've chosen to report an unexpected problem or bug. Unless you already know the root cause, please include details by filling in the issue template. The following information is missing: "Your Environment".
Could you provide full runnable code? Currently many variables (e.g. `cocoGt`, `cocoDt_c4`) are undefined.
The following code prints the same results for both evaluations, so I believe the issue does not exist:
```python
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader, MetadataCatalog
from detectron2.config import get_cfg
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.utils.logger import setup_logger

setup_logger()

# Build a predictor from the model-zoo R50-C4 config and its released weights.
cfg_c4 = get_cfg()
cfg_c4.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_C4_1x.yaml"))
cfg_c4.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_C4_1x.yaml")
predictor_c4 = DefaultPredictor(cfg_c4)

# First evaluation path: detectron2's COCOEvaluator, which also writes
# the raw predictions to output_dir as coco_instances_results.json.
evaluator_c4 = COCOEvaluator("coco_2017_val", ("bbox",), False, output_dir="./output")
val_loader_c4 = build_detection_test_loader(cfg_c4, "coco_2017_val")
print(inference_on_dataset(predictor_c4.model, val_loader_c4, evaluator_c4))

# Second evaluation path: plain pycocotools, fed the predictions that
# COCOEvaluator just wrote. Both paths should report the same numbers.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

cocoGt = COCO(MetadataCatalog.get("coco_2017_val").json_file)
cocoDt = cocoGt.loadRes("./output/coco_instances_results.json")
cocoEval = COCOeval(cocoGt, cocoDt, "bbox")
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
```
If you do not know the root cause of the problem, please post according to this template:
Instructions To Reproduce the Issue:
Check https://stackoverflow.com/help/minimal-reproducible-example for how to ask good questions. Simplify the steps to reproduce the issue using suggestions from the above link, and provide them below:
```python
cfg_c4 = get_cfg()
cfg_c4.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_C4_1x.yaml"))
cfg_c4.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_C4_1x.yaml")
predictor_c4 = DefaultPredictor(cfg_c4)

evaluator_c4 = COCOEvaluator("coco_2017_valid", ("bbox",), False, output_dir="./drive/MyDrive/output2/R50-c4")
val_loader_c4 = build_detection_test_loader(cfg_c4, "coco_2017_valid")
print(inference_on_dataset(predictor_c4.model, val_loader_c4, evaluator_c4))

cocoEval = COCOeval(cocoGt, cocoDt_c4, annType)
cocoEval.params.imgIds = imgIds
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
```
```
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=0.66s).
Accumulating evaluation results...
DONE (t=0.37s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.448
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.662
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.506
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.284
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.471
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.659
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.386
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.543
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.551
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.340
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.564
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.750
```
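As a side note, after `summarize()` these 12 numbers are also available programmatically in `cocoEval.stats` (a NumPy array), which is more reliable than reading the printed table. If all you have is the printed summary, an illustrative stdlib-only sketch for pulling the values out (not part of pycocotools; the sample text below is a truncated stand-in for the full output):

```python
import re

# A few lines of COCO summary output, as printed by COCOeval.summarize().
summary = """\
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.448
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.662
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.551
"""

# Split each metric line into (metric, IoU range, area, maxDets, value).
pattern = re.compile(
    r"Average (?:Precision|Recall)\s+\((AP|AR)\) @\[ IoU=([\d.:]+)\s*\| "
    r"area=\s*(\w+) \| maxDets=\s*(\d+) \] = ([\d.]+)"
)
metrics = {
    (m.group(1), m.group(2), m.group(3), int(m.group(4))): float(m.group(5))
    for m in pattern.finditer(summary)
}

print(metrics[("AP", "0.50:0.95", "all", 100)])  # -> 0.448
```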
Detectron2 tutorial colab