openvinotoolkit / openvino_xai

OpenVINO™ Explainable AI (XAI) Toolkit: Visual Explanation for OpenVINO Models
https://openvinotoolkit.github.io/openvino_xai/
Apache License 2.0

Insertion Deletion AUC metric #56

Closed. GalyaZalesskaya closed this 3 months ago.

GalyaZalesskaya commented 3 months ago
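For context, an insertion/deletion metric (e.g., as described in the RISE paper) scores a saliency map by tracking how the model's confidence for the target class changes while the most salient pixels are progressively inserted into an empty input or deleted from the original image, then taking the area under each confidence curve. A minimal sketch of that idea, not the implementation in this PR; `predict_fn`, `saliency`, and `steps` are assumed names:

```python
# Illustrative sketch only. `predict_fn` maps an HxWxC image to the target-class
# probability; `saliency` is an HxW importance map. Both names are assumptions.
import numpy as np


def _auc(scores: list) -> float:
    # Trapezoidal area under the curve with the x-axis normalized to [0, 1].
    s = np.asarray(scores, dtype=float)
    return float(np.mean((s[1:] + s[:-1]) / 2.0))


def insertion_deletion_auc(image, saliency, predict_fn, steps: int = 30) -> dict:
    h, w = saliency.shape
    order = np.argsort(saliency.ravel())[::-1]  # most salient pixels first
    baseline = np.zeros_like(image)
    insertion, deletion = [], []
    for step in range(steps + 1):
        n = int(len(order) * step / steps)
        rows, cols = np.unravel_index(order[:n], (h, w))
        inserted = baseline.copy()
        inserted[rows, cols] = image[rows, cols]  # reveal the top-n pixels
        deleted = image.copy()
        deleted[rows, cols] = 0                   # erase the top-n pixels
        insertion.append(predict_fn(inserted))
        deletion.append(predict_fn(deleted))
    # A good saliency map yields a high insertion AUC and a low deletion AUC.
    return {"insertion": _auc(insertion), "deletion": _auc(deletion)}
```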
codecov[bot] commented 3 months ago

Codecov Report

Attention: Patch coverage is 98.73418% with 1 line in your changes missing coverage. Please review.

Project coverage is 92.64%. Comparing base (1e41ff2) to head (dab6e9a). Report is 1 commit behind head on develop.

| Files | Patch % | Lines |
|---|---|---|
| openvino_xai/explainer/utils.py | 88.88% | 1 Missing :warning: |

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           develop      #56      +/-   ##
===========================================
+ Coverage    92.43%   92.64%    +0.21%
===========================================
  Files           20       22        +2
  Lines         1308     1373       +65
===========================================
+ Hits          1209     1272       +63
- Misses          99      101        +2
```

| [Flag](https://app.codecov.io/gh/openvinotoolkit/openvino_xai/pull/56/flags?src=pr&el=flags&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=openvinotoolkit) | Coverage Δ | |
|---|---|---|
| [dev-py310](https://app.codecov.io/gh/openvinotoolkit/openvino_xai/pull/56/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=openvinotoolkit) | `92.64% <98.73%> (+0.21%)` | :arrow_up: |

Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=openvinotoolkit#carryforward-flags-in-the-pull-request-comment) to find out more.

:umbrella: View full report in Codecov by Sentry.

GalyaZalesskaya commented 3 months ago

Thank you for your suggestions, Songki. I implemented `__call__` for a single image and `evaluate` for a set of images. Both now return `Dict[str, float]`, which helps generality, but getting a single score is no longer as elegant:

    pointing_game_score = self.pointing_game.evaluate([explanation], self.gt_bboxes)["pointing_game"]
    pointing_game_score = list(self.pointing_game.evaluate([explanation], self.gt_bboxes).values())[0]
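For reference, a rough sketch of how such an interface could look; the class and method bodies below are illustrative assumptions, not the actual openvino_xai API:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class BaseMetric(ABC):
    """Illustrative sketch: __call__ scores one explanation, evaluate averages over a set."""

    @abstractmethod
    def __call__(self, explanation: Any, gt: Any) -> Dict[str, float]:
        """Return per-metric scores for a single image, e.g. {"pointing_game": 1.0}."""

    def evaluate(self, explanations: List[Any], gts: List[Any]) -> Dict[str, float]:
        """Average the per-image score dictionaries key by key."""
        per_image = [self(explanation, gt) for explanation, gt in zip(explanations, gts)]
        return {
            key: sum(scores[key] for scores in per_image) / len(per_image)
            for key in per_image[0]
        }
```

With the dict return, a small convenience (for example, storing the metric's key name as a class attribute) could make single-score retrieval less verbose.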

Possibly, `evaluate` could also update the internal state of the `Metric` class so that the score is exposed as a `pointing_game.score` property. Let's leave that optimization for the next PR.
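A rough sketch of what that stateful variant might look like; everything here is hypothetical, not the current API:

```python
class PointingGame:
    """Hypothetical stateful variant: evaluate() caches scores, a property exposes them."""

    def __init__(self) -> None:
        self._scores: dict[str, float] = {}

    def evaluate(self, explanations, gt_bboxes) -> dict[str, float]:
        # ... compute the metric over all explanations as before ...
        self._scores = {"pointing_game": 1.0}  # placeholder result for illustration
        return self._scores

    @property
    def score(self) -> float:
        # Convenience accessor for the common single-metric case.
        return next(iter(self._scores.values()))
```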