-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussion…
-
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
Currently the metrics…
-
**Describe the feature you'd like**
mAP is widely used for measuring the precision of object detection models. It would be a very helpful feature to add.
**Describe the use cases of the featu…
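A minimal sketch of what such a metric would compute for a single class, assuming detections have already been matched to ground truth and sorted by descending confidence (the function name and sample data are illustrative, not the library's API):

```python
# Sketch: non-interpolated average precision (AP) for one class.
# is_tp marks each detection, sorted by descending score, as TP or FP.

def average_precision(is_tp, num_gt):
    """is_tp: list of booleans for score-sorted detections.
    num_gt: number of ground-truth boxes for this class."""
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for hit in is_tp:
        tp += hit
        fp += not hit
        precision = tp / (tp + fp)
        recall = tp / num_gt
        # accumulate precision weighted by the recall increment
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

print(average_precision([True, True, False, True], num_gt=4))  # 0.6875
```

mAP would then be the mean of this quantity over classes (and, COCO-style, over IoU thresholds).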
-
## 🐛 Bug
In [unittests](https://github.com/Lightning-AI/torchmetrics/blob/master/tests/unittests/classification/test_precision_recall.py) sklearn's ```recall_score``` and ```precision_score``` are b…
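For reference, a pure-Python version of the binary quantities those sklearn functions return, including the zero-division fallback the tests need to agree on (sample labels are illustrative):

```python
# Binary precision/recall from scratch, matching the standard definitions
# used by sklearn's precision_score / recall_score on binary inputs.

def precision_recall(y_true, y_pred):
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    # fall back to 0.0 on empty denominators (sklearn's zero_division=0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 0, 1, 1, 0]
y_pred = [1, 1, 1, 0, 0]
print(precision_recall(y_true, y_pred))  # (0.666..., 0.666...)
```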
-
### Discussed in https://github.com/orgs/ultralytics/discussions/17269
Originally posted by **ksv87** October 30, 2024
Why is AP considered this way? The P-R curve calculation doesn't look rig…
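The step the question is getting at is the monotone precision envelope applied before the P-R curve is integrated: each precision value is replaced by the maximum precision at any equal-or-higher recall. A small sketch of that step (values are illustrative):

```python
# Monotone precision envelope: scan right-to-left, carrying the running max,
# so the resulting precision curve is non-increasing in recall.

def precision_envelope(precisions):
    out, best = [], 0.0
    for p in reversed(precisions):
        best = max(best, p)
        out.append(best)
    return out[::-1]

print(precision_envelope([1.0, 0.5, 0.7, 0.4]))  # [1.0, 0.7, 0.7, 0.4]
```

This is why the curve used for AP can differ from the raw precision/recall points.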
-
#### Issue Description
There is no evaluation class for object detection, correct me if I am wrong. I am thinking of adding an evaluation method that calculates mean average precision to evaluate obje…
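One building block such an evaluator would need is IoU-based matching between predicted and ground-truth boxes. A minimal sketch (box format `(x1, y1, x2, y2)` and the sample values are assumptions):

```python
# Intersection over union for two axis-aligned boxes (x1, y1, x2, y2).

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.1429
```

A detection counts as a true positive when its IoU with an unmatched ground-truth box of the same class exceeds a threshold (commonly 0.5).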
-
(Copying this bug report from the main coco metrics https://github.com/cocodataset/cocoapi/issues/678 )
Hi there,
**Describe the bug**
Our detector does not output scores, so we set all to …
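The underlying problem can be shown without the COCO code: when all scores are equal, the sort among detections is arbitrary, and the running precision (hence AP) depends on which tied ordering is picked. A small illustration (sample data is made up):

```python
# Running precision over score-sorted detections; with tied scores the
# TP/FP order is arbitrary, and different orders give different curves.

def running_precision(is_tp):
    tp, out = 0, []
    for rank, hit in enumerate(is_tp, 1):
        tp += hit
        out.append(tp / rank)
    return out

# Same detections (2 TP, 1 FP), two valid orderings under tied scores:
print(running_precision([True, False, True]))   # [1.0, 0.5, 0.666...]
print(running_precision([False, True, True]))   # [0.0, 0.5, 0.666...]
```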
-
keras-rcnn should provide a mean average precision (mAP) [Keras-compatible metric](https://keras.io/metrics/) that can be used to evaluate the performance of a model during training.
-
# Metrics
Precision and recall were calculated to evaluate and compare the performance of the models. A confusion matrix was plotted for better understanding.
## Precis…
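The relationship between the confusion matrix and the reported precision/recall can be sketched as follows (class counts and labels are illustrative; rows are the true class, columns the predicted class):

```python
# Build a multiclass confusion matrix and read per-class precision/recall
# off its columns (predicted) and rows (true).

def confusion_matrix(y_true, y_pred, n_classes):
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

y_true = [0, 0, 1, 1, 2]
y_pred = [0, 1, 1, 1, 2]
m = confusion_matrix(y_true, y_pred, 3)
print(m)  # [[1, 1, 0], [0, 2, 0], [0, 0, 1]]

precision_1 = m[1][1] / sum(row[1] for row in m)  # 2/3: column 1
recall_1 = m[1][1] / sum(m[1])                    # 1.0: row 1
```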
-
**Metric's name**
Mean Forecast Error
**Metric's category**
Time Series
**Metrics formula**
**Describe the metrics use cases, and any relevant references.**
MFE measures the average dif…
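A minimal sketch of the metric under one common sign convention, `actual - forecast` (the sample series is illustrative):

```python
# Mean Forecast Error: signed mean of (actual - forecast). Unlike MAE,
# positive and negative errors cancel, so MFE measures bias, not accuracy.

def mean_forecast_error(actual, forecast):
    return sum(a - f for a, f in zip(actual, forecast)) / len(actual)

print(mean_forecast_error([10, 12, 14], [9, 13, 12]))  # (1 - 1 + 2) / 3 = 0.666...
```

A positive MFE under this convention means the model under-forecasts on average; some references use `forecast - actual`, which flips the sign.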