GalyaZalesskaya closed this pull request 3 months ago.
Attention: Patch coverage is 98.73418% with 1 line in your changes missing coverage. Please review.

Project coverage is 92.64%. Comparing base (1e41ff2) to head (dab6e9a). Report is 1 commit behind head on develop.
| Files | Patch % | Lines |
|---|---|---|
| openvino_xai/explainer/utils.py | 88.88% | 1 Missing :warning: |
Thank you for your suggestions, Songki. I implemented `__call__` for a single image and `evaluate` for a set of images. Both now return `Dict[str, float]`. This helps generality, but getting the score is no longer elegant:
```python
pointing_game_score = self.pointing_game.evaluate([explanation], self.gt_bboxes)["pointing_game"]
pointing_game_score = list(self.pointing_game.evaluate([explanation], self.gt_bboxes).values())[0]
```
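For context, a minimal sketch of the interface described above, assuming saliency maps as plain arrays and boxes as `(x1, y1, x2, y2)` tuples; the real classes and signatures in openvino_xai may differ:

```python
from typing import Dict, List, Tuple

import numpy as np


class PointingGame:
    """Illustrative metric: __call__ scores one image, evaluate scores a set."""

    def __call__(
        self, saliency_map: np.ndarray, gt_bboxes: List[Tuple[int, int, int, int]]
    ) -> Dict[str, float]:
        # Hit (1.0) if the most salient pixel falls inside any ground-truth box.
        y, x = np.unravel_index(saliency_map.argmax(), saliency_map.shape)
        hit = any(x1 <= x <= x2 and y1 <= y <= y2 for x1, y1, x2, y2 in gt_bboxes)
        return {"pointing_game": float(hit)}

    def evaluate(
        self, saliency_maps: List[np.ndarray], gt_bboxes: List[List[Tuple[int, int, int, int]]]
    ) -> Dict[str, float]:
        # Average the per-image hits over the whole set.
        hits = [self(sm, boxes)["pointing_game"] for sm, boxes in zip(saliency_maps, gt_bboxes)]
        return {"pointing_game": float(np.mean(hits))}
```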
Probably, `evaluate` could update the internal state of the `Metric` class, and then the scores could be exposed as a property, e.g. `pointing_game.score`, as sketched below. Let's leave that optimization for the next PR.
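A rough illustration of that idea, purely hypothetical and not part of this PR: `evaluate` caches its result so the score becomes reachable through a property:

```python
from typing import Dict, List


class Metric:
    """Hypothetical stateful base class: evaluate() caches its last result."""

    def __init__(self) -> None:
        self._last_scores: Dict[str, float] = {}

    def evaluate(self, explanations: List, gt_bboxes: List) -> Dict[str, float]:
        self._last_scores = self._compute(explanations, gt_bboxes)
        return self._last_scores

    @property
    def score(self) -> Dict[str, float]:
        # After evaluate(), scores are reachable without dict gymnastics.
        return self._last_scores

    def _compute(self, explanations: List, gt_bboxes: List) -> Dict[str, float]:
        raise NotImplementedError
```

With that, `pointing_game.evaluate([explanation], self.gt_bboxes)` followed by `pointing_game.score` would replace the one-liners above.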
- Added the Insertion/Deletion AUC metric, which calculates the accuracy drop/increase while deleting/inserting the most important pixels (see the sketch after this list)
- Added a parent `BasicMetric` class
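For reference, a self-contained sketch of the deletion half of that metric (the function name, step size, and zero-fill baseline are illustrative assumptions, not the code added in this PR): the most salient pixels are removed step by step, the model is re-scored after each step, and the area under the resulting confidence curve is taken.

```python
from typing import Callable

import numpy as np


def deletion_auc(
    predict: Callable[[np.ndarray], float],  # returns target-class confidence
    image: np.ndarray,                       # H x W x C input image
    saliency_map: np.ndarray,                # H x W importance scores
    steps: int = 100,
) -> float:
    """Zero out the most salient pixels step by step and average the score curve."""
    h, w = saliency_map.shape
    order = np.argsort(saliency_map.ravel())[::-1]  # most important pixels first
    per_step = max(1, (h * w) // steps)
    perturbed = image.copy()
    scores = [predict(perturbed)]
    for start in range(0, h * w, per_step):
        ys, xs = np.unravel_index(order[start : start + per_step], (h, w))
        perturbed[ys, xs] = 0  # "delete" pixels with a zero-fill baseline
        scores.append(predict(perturbed))
    # The mean of a curve sampled uniformly on [0, 1] approximates its AUC;
    # a lower value means the saliency map ranked truly important pixels first.
    return float(np.mean(scores))
```

The insertion variant would start from a fully deleted image and gradually restore the same pixels in order of importance; there, a higher AUC is better.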