This is a repo of two scripts for generating evaluation metrics from models trained with TensorFlow's Object Detection API. It calculates accuracy, IoU, and F1 score for each class and saves a confusion matrix heatmap as shown above.
This is based on an older version of the API (the only version I got working), but it should still work with newer versions if you have your files organized as shown below. In practice, it will work as long as you have `object-detection.pbtxt`, `frozen_inference_graph.pb`, and a valid `Test.csv`. Submit an issue if there's a bug!
First run `evaluate_test.py`, then run `python3 generate_evaluation_metrics.py`. This will generate `unifieddata.p` and `category_index.p`.
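If you want to inspect these intermediates yourself, here is a minimal sketch, assuming they were written with the standard `pickle` module (which the dependency list suggests); the variable names are just illustrative:

```python
import pickle

# Load the intermediate files written by the first pass.
# Their internal structure is defined by the scripts themselves;
# the names below are illustrative, not guaranteed.
with open("unifieddata.p", "rb") as f:
    unified_data = pickle.load(f)      # per-image ground truth + detections
with open("category_index.p", "rb") as f:
    category_index = pickle.load(f)    # class id -> class info mapping

print(type(unified_data), type(category_index))
```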
If you already have these two files, you can change `MODE` in `generate_evaluation_metrics.py` to 2. An `evaluation_metrics.csv` containing the evaluation metrics will be generated, and a confusion matrix named `confusion_matrix.png` will be saved to the root directory. `evaluation_metrics.csv` will contain the columns:

class | class accuracy | class f1score | class iou | overall accuracy | average of valid f1scores | average iou
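For reference, here is a rough sketch of how per-class F1 and overall accuracy can be derived from a confusion matrix; the scripts' exact definitions may differ, and the rows-are-ground-truth convention is an assumption:

```python
import numpy as np

def per_class_metrics(cm):
    """cm: square confusion matrix; rows = ground truth, cols = predictions (assumed)."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp          # predicted as the class, but wrong
    fn = cm.sum(axis=1) - tp          # missed instances of the class
    precision = tp / np.maximum(tp + fp, 1e-9)
    recall = tp / np.maximum(tp + fn, 1e-9)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-9)
    overall_accuracy = tp.sum() / cm.sum()
    return precision, recall, f1, overall_accuracy
```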
Expected file organization:

```
├── training
│   ├── model_ckpt_folder
│   └── object-detection.pbtxt
├── utils
│   ├── label_map_util.py
│   └── visualization_utils.py
├── model_name_folder
│   └── frozen_inference_graph.pb
├── Test
│   ├── 1.jpg
│   ├── 2.jpg
│   ├── 3.jpg
│   └── etc.jpg
├── Test.csv
├── evaluate_test.py
└── generate_evaluation_metrics.py
```
`utils` is from https://github.com/tensorflow/models/tree/master/research/object_detection.
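For example, the category index can be built from the label map with `label_map_util` (a sketch using that module's function names as they appear in the older API; `max_num_classes=90` is a placeholder for your model's class count):

```python
from utils import label_map_util

label_map = label_map_util.load_labelmap('training/object-detection.pbtxt')
categories = label_map_util.convert_label_map_to_categories(
    label_map, max_num_classes=90, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
```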
`frozen_inference_graph.pb` is the graph of whatever object detection model you've trained.
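It can be loaded with the standard TF1-era pattern (a sketch; under TensorFlow 2 these calls live in `tf.compat.v1`):

```python
import tensorflow as tf

PATH_TO_GRAPH = 'model_name_folder/frozen_inference_graph.pb'

detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_GRAPH, 'rb') as fid:
        graph_def.ParseFromString(fid.read())
        tf.import_graph_def(graph_def, name='')
```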
`Test.csv` is a CSV containing information about every image in the test set, with the columns:

filename | width | height | class | xmin | ymin | xmax | ymax
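A sketch of reading `Test.csv` with pandas, with an illustrative IoU helper over the `(xmin, ymin, xmax, ymax)` columns; this is not necessarily the exact definition the scripts use:

```python
import pandas as pd

gt = pd.read_csv('Test.csv')  # one row per annotated box

def iou(a, b):
    """Intersection over union of two (xmin, ymin, xmax, ymax) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Boxes belonging to a single test image:
first_image = gt[gt['filename'] == gt['filename'].iloc[0]]
boxes = first_image[['xmin', 'ymin', 'xmax', 'ymax']].values
```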
Dependencies: numpy, tensorflow, pandas, matplotlib, and PIL; pickle, os, and sys are part of the Python standard library.
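A plausible install command for the third-party packages (Pillow provides the `PIL` module; pin an older 1.x TensorFlow release if you want to match the API version the scripts were written against):

```
pip3 install numpy tensorflow pandas matplotlib Pillow
```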