PaulHax opened 3 weeks ago
Hi Paul, sorry, as you know things have been a bit chaotic lately, so it's taking me time to get back to you. We have work in the pipeline to add new Scorers to nrtk
that would support more metrics; our intern will be starting on this soon. However, I don't believe any of our built-in methods do this right now.
Could I get something on our work calendar so we can discuss further? I might be able to get our intern on it sooner.
In case it's helpful someday, I'm just dumping the notes on the scorer API that I made while using it in NRTK Explorer.
Clearer types for the `actual` and `predicted` parameters would be nice. The NRTK Explorer app has quite a bit of code to massage the app data structures for `score` to support all the above comparison combos: https://github.com/Kitware/nrtk-explorer/pull/61/commits/910618915b10f5d56388f26a48a50b785969b22c#diff-e2419bbe5bf4620af45160c9f8c0f759e403764ace61b5d285844303972abf0eR7-R94
Maybe something like this:

```python
from typing import Dict, Hashable, Sequence, Tuple

from smqtk_image_io import AxisAlignedBoundingBox

Category = Hashable
Confidence = float
Annotation = Tuple[Dict[Category, Confidence], AxisAlignedBoundingBox]
AnnotationGroups = Sequence[Sequence[Annotation]]
```
The `predicted` parameter type is `Sequence[Sequence[Tuple[AxisAlignedBoundingBox, Dict[Hashable, float]]]]`. What should I put in the `Hashable` part of the `Dict`? The category ID, the category name, my own random ID? Vicente figured it out, but I was puzzled.
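For reference, here is roughly what a valid `predicted` payload could look like when integer COCO category IDs are used as the `Hashable` keys. This is a sketch with a stand-in bounding-box class so it runs without smqtk-image-io installed; the real `AxisAlignedBoundingBox` takes the same `(min_vertex, max_vertex)` arguments, and the specific IDs and coordinates are made up:

```python
from typing import Dict, Hashable, Sequence, Tuple

# Stand-in for smqtk_image_io.AxisAlignedBoundingBox so this sketch is
# self-contained; the real class is constructed the same way.
class AxisAlignedBoundingBox:
    def __init__(self, min_vertex, max_vertex):
        self.min_vertex = tuple(min_vertex)
        self.max_vertex = tuple(max_vertex)

# One inner list per image; each detection pairs a box with a
# {category: confidence} mapping. Here the Hashable keys are COCO
# category IDs, but any hashable label (e.g. the category name) works,
# as long as `actual` and `predicted` use the same scheme.
predicted: Sequence[Sequence[Tuple[AxisAlignedBoundingBox, Dict[Hashable, float]]]] = [
    [  # detections for image 0
        (AxisAlignedBoundingBox((10, 10), (50, 80)), {1: 0.92}),
        (AxisAlignedBoundingBox((60, 20), (90, 70)), {3: 0.41}),
    ],
    [],  # image 1: the model found nothing
]
```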
Because maybe there is an image with no ground truth annotations in the dataset?
A `score(imageA, imageB)` function: sometimes I just want to score one image pair, not a whole batch (like in the lazy/async image processing pipeline of my dreams).
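In the meantime, a single pair can be scored by wrapping it in a length-1 batch. Here is a hypothetical helper (`score_one` and `CountMatchScorer` are not part of nrtk; the helper only assumes the batch-style `score(actual, predicted) -> Sequence[float]` shape of nrtk's scorer interface):

```python
from typing import Sequence

def score_one(scorer, actual_one, predicted_one) -> float:
    """Score a single image's annotations with a batch-oriented scorer.

    Hypothetical helper: wraps the per-image annotation lists in
    length-1 batches and unwraps the single result.
    """
    results: Sequence[float] = scorer.score([actual_one], [predicted_one])
    return results[0]

# Toy stand-in scorer for demonstration only; real code would pass a
# configured nrtk scorer instance instead.
class CountMatchScorer:
    def score(self, actual, predicted):
        # 1.0 per image when annotation counts match, else 0.0
        return [float(len(a) == len(p)) for a, p in zip(actual, predicted)]
```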
The nrtk_pybsm image transformer outputs an image with a different resolution from the input image. The pixel-wise comparison breaks when scoring object detection model predictions on the transformed image against the ground truth. It would be nice if there were an example that "resized" the transformed image's annotations to match the original image before passing them to `score`. Or should we be resizing the transformed image back to the original size?
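The annotation-resizing option could be as simple as scaling box corners by the ratio of the two resolutions. A minimal sketch, assuming boxes as plain `(xmin, ymin, xmax, ymax)` tuples (`rescale_box` is a hypothetical helper, not an nrtk function):

```python
def rescale_box(box, from_size, to_size):
    """Scale an (xmin, ymin, xmax, ymax) box between image resolutions.

    from_size and to_size are (width, height) of the image the box is
    currently in and the image it should be mapped to.
    """
    sx = to_size[0] / from_size[0]
    sy = to_size[1] / from_size[1]
    xmin, ymin, xmax, ymax = box
    return (xmin * sx, ymin * sy, xmax * sx, ymax * sy)

# e.g. a detection on a 1024x768 transformed image mapped back onto
# the 512x384 original before scoring:
orig_box = rescale_box((100, 50, 300, 200), (1024, 768), (512, 384))
```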
Hello! I'm helping with the NRTK Explorer app. @vicentebolea is working on generating a score comparing 2 images here: https://github.com/Kitware/nrtk-explorer/pull/61
Sometimes the configured Object Detection Model outputs categories that are not categories in the COCO JSON.
`nrtk.impls.score_detections.coco_scorer` errors, understandably, when that happens. `nrtk.impls.score_detections.class_agnostic_pixelwise_iou_scorer` works just fine. But is there a way to get a score that takes into account the class/category of the annotations (independent of the COCO JSON)?
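To illustrate what I mean by "class-aware, independent of the COCO JSON": something that treats categories as opaque hashables and only matches predictions to ground truth within the same category. A rough sketch (not nrtk's API; boxes are plain tuples, and the scoring rule here is a simple best-IoU-per-ground-truth average, not COCO mAP):

```python
from typing import Dict, Hashable, List, Tuple

Box = Tuple[float, float, float, float]  # xmin, ymin, xmax, ymax

def box_iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = max(0.0, a[2] - a[0]) * max(0.0, a[3] - a[1])
    area_b = max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def class_aware_score(
    actual: List[Tuple[Box, Hashable]],
    predicted: List[Tuple[Box, Dict[Hashable, float]]],
) -> float:
    """Mean, over ground-truth boxes, of the best IoU among predictions
    whose top-scoring category matches the ground-truth category.

    Categories are compared as opaque hashables, so no COCO category
    list is needed; unknown predicted categories simply never match.
    """
    if not actual:
        return 0.0
    # Bucket predicted boxes by their highest-confidence category.
    per_class: Dict[Hashable, List[Box]] = {}
    for box, scores in predicted:
        top = max(scores, key=scores.get)
        per_class.setdefault(top, []).append(box)
    total = 0.0
    for box, cat in actual:
        candidates = per_class.get(cat, [])
        total += max((box_iou(box, p) for p in candidates), default=0.0)
    return total / len(actual)
```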