DevinBayly / thermal_imaging

Access the log plotter here: https://devinbayly.github.io/thermal_imaging/

Research metrics used for evaluating the performance of computer vision programs like this #19

Open DevinBayly opened 2 years ago

DevinBayly commented 2 years ago

First article to explore, by Venkatesh Wadawadagi:

https://www.kdnuggets.com/2020/08/metrics-evaluate-deep-learning-object-detectors.html

Unsure whether this makes a difference, but the article is definitely framed in terms of neural network detectors.

The metric used in many computer vision competitions is called average precision (AP).

Another measure, IoU, stands for intersection over union: the area of overlap between a predicted box and a ground-truth box divided by the area of their union. The article uses an IoU threshold to characterize detections (a small sketch follows this list):

- True Positive as characterized by IoU: a detection that overlaps a ground-truth box with IoU at or above the threshold
- False Positive as characterized by IoU: a detection whose IoU with every ground-truth box falls below the threshold
- False Negatives: ground-truth boxes that no detection matches
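A minimal sketch of how I understand the IoU rule, with made-up boxes and a commonly used 0.5 threshold (none of this comes from the article):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

detection = (48, 40, 210, 180)      # hypothetical predicted box
ground_truth = (50, 50, 200, 175)   # hypothetical ground-truth box
overlap = iou(detection, ground_truth)
label = "TP" if overlap >= 0.5 else "FP"  # 0.5 is a common threshold choice
print(f"IoU = {overlap:.2f} -> {label}")
```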

Precision measures how accurate the predictions are, i.e. the percentage of predictions that are correct:

Precision = TP / (TP + FP)

Recall measures how well the method finds all of the positives:

Recall = TP / (TP + FN)
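A tiny check of the two formulas with hypothetical counts (70 correct detections, 10 spurious ones, 20 missed objects):

```python
def precision(tp, fp):
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0

print(precision(70, 10))  # 0.875
print(recall(70, 20))     # ~0.778
```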

Average Precision (AP) is the area under the precision-recall curve. Not entirely sure how this curve is plotted; my current guess is sketched below.
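My understanding (an assumption on my part, not taken from the article): sort the detections for a class by confidence, label each one TP or FP with the IoU rule, accumulate precision and recall while sweeping down the list, and integrate precision over recall. The confidences and labels below are made up:

```python
detections = [  # (confidence, is_true_positive), pre-labeled via the IoU rule
    (0.95, True), (0.90, True), (0.80, False),
    (0.70, True), (0.60, False), (0.50, True),
]
total_ground_truth = 5  # hypothetical number of ground-truth objects

detections.sort(key=lambda d: d[0], reverse=True)
tp = fp = 0
precisions, recalls = [], []
for _, is_tp in detections:
    tp += is_tp
    fp += not is_tp
    precisions.append(tp / (tp + fp))
    recalls.append(tp / total_ground_truth)

# Simple rectangular integration of precision over recall gives an AP estimate.
ap = 0.0
prev_recall = 0.0
for p, r in zip(precisions, recalls):
    ap += p * (r - prev_recall)
    prev_recall = r
print(f"AP ~= {ap:.3f}")
```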

mAP (mean average precision) is the AP averaged over all of the classes.
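Assuming a per-class AP dictionary produced by a step like the one above (class names and values here are made up), mAP would just be the mean:

```python
ap_per_class = {"person": 0.68, "car": 0.74, "bicycle": 0.55}  # hypothetical values
map_value = sum(ap_per_class.values()) / len(ap_per_class)
print(f"mAP = {map_value:.3f}")
```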

They recommend creating a validation dataset

Outlined tasks for the validation set

Outlined tasks for the test set

In essence, this piece describes a few metrics that are useful for determining whether the system is performing well, but a number of terms are still unclear to me.

Metrics

Unknown Terms

Their references

https://arxiv.org/pdf/1711.00164.pdf
https://www.researchgate.net/publication/343194514_A_Survey_on_Performance_Metrics_for_Object-Detection_Algorithms/link/5f1b5a5e45851515ef478268
https://github.com/ultralytics/yolov3/issues/898
https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/