Closed: sauravsolanki closed this issue 4 years ago
I solved this issue. I had made a mistake in building the per-detection entries for the mAP calculation. The corrected code:

```python
temp = {}
temp["image_id"] = image_id
temp["score"] = round(scores[i], 3)            # confidence of the i-th detection
left = max(0, left)                             # clip the top-left corner to the image
top = max(0, top)
height = min(im_height, top + height) - top     # clip the box extent to the image bounds
width = min(im_width, left + width) - left
temp["bbox"] = [left, top, width, height]       # COCO detections use [x, y, width, height]
temp["category_id"] = int(detection[1])
annotation_json.append(temp)
```
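For anyone hitting the same problem, here is a minimal sketch of scoring the accumulated `annotation_json` with pycocotools, as the pycocoEvalDemo notebook does (the ground-truth path is a placeholder):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

cocoGt = COCO("instances_val.json")        # ground truth in COCO format (placeholder path)
cocoDt = cocoGt.loadRes(annotation_json)   # loadRes also accepts an in-memory list of dicts
cocoEval = COCOeval(cocoGt, cocoDt, "bbox")
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
```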
Hi everyone, I am using SSD MobileNet V2 from the TensorFlow model zoo and computing mAP myself, not through the Object Detection API. What I did is simply convert the annotations to JSON in the format described at cocodataset.org (a sketch of that layout follows the system information below) and use the pycocoEvalDemo notebook inside the cocoapi directory for the mAP calculation. I cannot use the API because I am working with a quantised model and IR files (an OpenVINO model).

Here is the info:

System information

- What is the top-level directory of the model you are using: models/research
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution: Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 1.12
- CUDA/cuDNN version: No
- GPU model and memory: No
- Exact command to reproduce:
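For reference, the ground-truth file follows the COCO detection layout documented at cocodataset.org; a minimal sketch of building one (all ids, names, and sizes are illustrative, and `instances_val.json` is a placeholder file name):

```python
import json

# Minimal COCO-format ground truth: three top-level lists.
ground_truth = {
    "images": [
        {"id": 1, "file_name": "000001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,               # must match an entry in "images"
            "category_id": 1,            # must match an entry in "categories"
            "bbox": [100, 120, 50, 80],  # [x, y, width, height] in pixels
            "area": 50 * 80,
            "iscrowd": 0,
        },
    ],
    "categories": [
        {"id": 1, "name": "person"},
    ],
}

with open("instances_val.json", "w") as f:
    json.dump(ground_truth, f)
```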
Code to convert to a frozen inference graph. First I changed pipeline.config as follows:

```
post_processing {
  batch_non_max_suppression {
    score_threshold: 0  # edited to 0
    iou_threshold: 0    # edited to 0
    max_detections_per_class: 100
    max_total_detections: 100
  }
  score_converter: SIGMOID
}
```

and then ran:

```bash
python object_detection/export_inference_graph.py \
  --input_type image_tensor \
  --pipeline_config_path ./ssd_mobilenet_v2_coco_2018_03_29/pipeline.config \
  --trained_checkpoint_prefix ./ssd_mobilenet_v2_coco_2018_03_29/model.ckpt \
  --output_directory inference_graph
```
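For completeness, a minimal sketch of running the exported frozen graph to get the boxes, scores, and classes that feed the annotation code below; the tensor names are the ones the Object Detection API exports by default, and the image path is a placeholder:

```python
import numpy as np
import tensorflow as tf  # TF 1.x
from PIL import Image

# Load the frozen graph exported above.
graph_def = tf.GraphDef()
with tf.gfile.GFile("inference_graph/frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    image = np.array(Image.open("000001.jpg"))  # placeholder image, uint8 HxWx3
    boxes, scores, classes, num = sess.run(
        ["detection_boxes:0", "detection_scores:0",
         "detection_classes:0", "num_detections:0"],
        feed_dict={"image_tensor:0": image[None, ...]},  # add a batch dimension
    )
    # boxes are normalized [ymin, xmin, ymax, xmax]; scale by the image
    # height/width before building COCO [x, y, width, height] entries
```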
Basic code for recording annotations (the per-detection body is the snippet shown in the closing comment above):

```python
def annotate(out, frame, idx, labels, image_id):
    global annotation_json
    ...  # per-detection loop builds `temp` dicts and appends them to annotation_json
    return
```
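Once every frame has been processed, the accumulated list can be written out for cocoapi (the file name is a placeholder):

```python
import json

# Dump the detections in the COCO results format expected by loadRes.
with open("detections.json", "w") as f:
    json.dump(annotation_json, f)
```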
I have checked the annotation; it is correct. What I get after the mAP calculation is:

```
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=2.62s).
Accumulating evaluation results...
DONE (t=1.10s).
 Average Precision  (AP) @[ IoU=0.30:0.95 | area=   all | maxDets=100 ] = 0.031
 Average Precision  (AP) @[ IoU=0.30      | area=   all | maxDets=100 ] = 0.098
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.30:0.95 | area= small | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.30:0.95 | area=medium | maxDets=100 ] = 0.003
 Average Precision  (AP) @[ IoU=0.30:0.95 | area= large | maxDets=100 ] = 0.076
 Average Recall     (AR) @[ IoU=0.30:0.95 | area=   all | maxDets=  1 ] = 0.052
 Average Recall     (AR) @[ IoU=0.30:0.95 | area=   all | maxDets= 10 ] = 0.052
 Average Recall     (AR) @[ IoU=0.30:0.95 | area=   all | maxDets=100 ] = 0.052
 Average Recall     (AR) @[ IoU=0.30:0.95 | area= small | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.30:0.95 | area=medium | maxDets=100 ] = 0.003
 Average Recall     (AR) @[ IoU=0.30:0.95 | area= large | maxDets=100 ] = 0.130
```
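One note on the headers above: stock pycocotools sweeps IoU=0.50:0.95 and hardcodes the 0.50/0.75 summary lines, so the IoU=0.30 values suggest the thresholds were customized. A sketch of how that override usually looks, assuming `cocoEval` is the `COCOeval` object from the demo notebook (matching the 0.30 summary line exactly would also require editing `summarize()` itself):

```python
import numpy as np

# Sweep IoU thresholds 0.30, 0.35, ..., 0.95 instead of the default 0.50:0.95.
cocoEval.params.iouThrs = np.linspace(0.3, 0.95, 14)
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
```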
Code links: https://drive.google.com/file/d/1GscmmQD-K5RtYrU3M55I8rkcABFzyw3j/view?usp=sharing

I am using the evaluation API from: https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb