-
I'm testing 3DETR in an outdoor scenario (the nuScenes 3D dataset), but ran into the same problem as #28. The metrics are all zero after training for 90 epochs. I visualize the gt and pred boxes as follows. Also, I'…
-
Hello,
I am trying to test a model I trained and get mAP at a custom IoU threshold.
The model is Co-DETR (detection).
I am calling the test_evaluator like this; I also added the iou_thrs pa…
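For reference, a minimal sketch of how a custom IoU threshold is typically passed to MMDetection 3.x's `CocoMetric` (the annotation path and threshold here are placeholders, not the poster's actual setup):

```python
# Hypothetical config fragment: CocoMetric accepts an `iou_thrs` argument,
# so a single custom threshold can replace the default 0.50:0.95 sweep.
val_evaluator = dict(
    type='CocoMetric',
    ann_file='annotations/instances_val.json',  # placeholder path
    metric='bbox',
    iou_thrs=[0.6],  # report AP at IoU=0.6 instead of the default range
)
test_evaluator = val_evaluator
```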
-
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.75 …
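For context, -1.000 in this summary is the pycocotools sentinel for "nothing to evaluate" (no ground-truth annotations or detections in the selected category/area range), not a score of zero. A minimal sketch of the standard loop that prints this table (file names are placeholders):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('instances_val.json')          # ground-truth annotations
coco_dt = coco_gt.loadRes('detections.json')  # model predictions

evaluator = COCOeval(coco_gt, coco_dt, iouType='bbox')
evaluator.evaluate()
evaluator.accumulate()
# Prints the "Average Precision (AP) @[...]" table; any setting with no
# matching annotations is reported as -1.000 rather than 0.000.
evaluator.summarize()
```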
-
keras-rcnn should provide a mean average precision (mAP) [Keras-compatible metric](https://keras.io/metrics/) that can be used to evaluate the performance of a model during training.
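One hedged workaround sketch (not keras-rcnn's API): since AP needs the whole ranked validation set, it fits a Callback at epoch end better than a per-batch Keras metric. This computes a classification-style mAP; a detection mAP would additionally require IoU matching of boxes.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import average_precision_score

class MeanAPCallback(tf.keras.callbacks.Callback):
    """Computes mean average precision over the validation set each epoch."""

    def __init__(self, x_val, y_val):
        super().__init__()
        self.x_val = x_val
        self.y_val = y_val  # expected shape: (n_samples, n_classes), one-hot

    def on_epoch_end(self, epoch, logs=None):
        scores = self.model.predict(self.x_val, verbose=0)
        # Mean over classes of the per-class average precision.
        map_score = float(np.mean([
            average_precision_score(self.y_val[:, c], scores[:, c])
            for c in range(self.y_val.shape[1])
        ]))
        if logs is not None:
            logs['val_mAP'] = map_score
        print(f'epoch {epoch + 1}: val mAP = {map_score:.4f}')
```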
-
Add a differentially private variant of the [metrics.average_precision_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html#sklearn.metrics.average_pre…
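A naive sketch of what such a variant could look like via output perturbation (an assumption for illustration, not a vetted mechanism): AP is bounded in [0, 1], so its global sensitivity is at most 1, and a single release with Laplace(1/ε) noise satisfies ε-differential privacy, albeit with a very loose bound.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def dp_average_precision_score(y_true, y_score, epsilon, rng=None):
    """epsilon-DP release of AP via the Laplace mechanism (naive bound)."""
    rng = rng or np.random.default_rng()
    ap = average_precision_score(y_true, y_score)
    # Global sensitivity of a [0, 1]-bounded statistic is at most 1,
    # so Laplace noise with scale 1/epsilon suffices (loosely).
    noisy = ap + rng.laplace(scale=1.0 / epsilon)
    return float(np.clip(noisy, 0.0, 1.0))  # post-processing: clamp to [0, 1]
```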
-
During evaluation, the AP for one of the target classes comes out greater than 1, e.g. 1.76. How is this possible? Is there anything I'm missing in how I'm using the code?
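One way this can happen (an assumed toy illustration, not a diagnosis of this codebase): if precision is summed at every rank instead of only at the ranks of relevant items, the sum can outgrow the denominator.

```python
# Toy ranked list: relevant items retrieved at ranks 1 and 2, a miss at rank 3.
relevant = [1, 1, 0]
precision_at_k = [1 / 1, 2 / 2, 2 / 3]  # precision after each rank
num_relevant = sum(relevant)            # 2 relevant items overall

# Buggy: sums precision at *all* ranks -> (1 + 1 + 0.667) / 2 = 1.33 > 1
buggy_ap = sum(precision_at_k) / num_relevant

# Correct: sums precision only at relevant ranks -> (1 + 1) / 2 = 1.0
correct_ap = sum(p for p, r in zip(precision_at_k, relevant) if r) / num_relevant

print(buggy_ap, correct_ap)  # 1.333..., 1.0
```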
-
Hi, I think there is a mistake in computing average precision as ap(i,1) = sum(precision)/queryClassNum. According to the formula, the denominator should be the number of non-zero items in the precision vector rather…
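A hedged sketch of the change being proposed, as I read it (variable names mirror the snippet in the report, not the actual repository code):

```python
import numpy as np

def average_precision(precision, queryClassNum):
    precision = np.asarray(precision, dtype=float)
    # Current behaviour described in the report:
    #   ap = precision.sum() / queryClassNum
    # Proposed: normalize by the number of non-zero precision entries.
    n_nonzero = np.count_nonzero(precision)
    return precision.sum() / n_nonzero if n_nonzero else 0.0
```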
-
Hi, I'm attempting to reproduce the performance metrics of models using HuggingFace's Pipeline utility, but I'm getting different results. Below is the Python code I used for testing:
```python…
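The poster's code is truncated above; as a generic, assumed illustration of a common cause of such mismatches, a pipeline bundles its own pre- and post-processing defaults, so a manual forward pass only matches when both routes use identical settings (the checkpoint name below is just an example):

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          pipeline)

model_name = 'distilbert-base-uncased-finetuned-sst-2-english'  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = 'a well-made film'

# Route 1: the pipeline applies its own pre/post-processing defaults.
clf = pipeline('text-classification', model=model, tokenizer=tokenizer)
print(clf(text))

# Route 2: manual forward pass; differences in truncation, padding, or
# softmax handling here are a frequent source of metric discrepancies.
inputs = tokenizer(text, return_tensors='pt', truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))
```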
-
Hey,
I have a conceptual question. I set up a one-class classification task and trained a model with MMDetection for it.
In one-class classification I only know the positive samples…
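For what it's worth, a toy sketch (assumed numbers, simplified greedy matching) of why precision is still well defined with only positive ground truth: any detection that fails to match a ground-truth box at the IoU threshold counts as a false positive, so negatives are implicit rather than annotated.

```python
def precision_recall(num_matched, num_dets, num_gts):
    tp = num_matched             # detections matched to a ground-truth box
    fp = num_dets - num_matched  # unmatched detections act as implicit negatives
    fn = num_gts - num_matched   # ground-truth boxes that were missed
    return tp / (tp + fp), tp / (tp + fn)

# e.g. 10 detections, 8 matched, 12 GT boxes -> precision 0.8, recall ~0.67
print(precision_recall(num_matched=8, num_dets=10, num_gts=12))
```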
-
auto_scale_lr = dict(base_batch_size=16, enable=True)
backend_args = None
data_root = '/data/luoyq/data/toutu/v3'
dataset_type = 'VOCDataset'
default_hooks = dict(
checkpoint=dict(rule='gre…