sandylaker / saliency-metrics

An Open-source Framework for Benchmarking Explanation Methods in Computer Vision
https://saliency-metrics.readthedocs.io
MIT License

Implementing Insertion-Deletion #21

Open rjagtani opened 2 years ago

rjagtani commented 2 years ago

Short Description

Implementation of the Insertion-Deletion metric for evaluating saliency maps. Fixes #15.

Long Description

- Implemented the required classes and functions
- Added unit tests
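For context, the metric follows the insertion/deletion protocol introduced with RISE by Petsiuk et al.: pixels are removed (or inserted) in order of decreasing saliency, the target-class probability is recorded at each step, and the area under the resulting curve is the score. Below is a minimal sketch of the deletion half, assuming a classifier that maps a (1, C, H, W) tensor to logits; the function name, the zero baseline, and the step size are illustrative, not this PR's actual API.

```python
import torch

def deletion_auc(classifier, img, saliency_map, target, step=100):
    """Delete pixels in decreasing order of saliency and track the
    target-class probability; the area under that curve is the deletion
    score (lower means a more faithful saliency map)."""
    _, _, h, w = img.shape
    num_pixels = h * w
    # Rank pixel indices from most to least salient.
    order = torch.argsort(saliency_map.flatten(), descending=True)
    perturbed = img.clone().flatten(start_dim=2)  # (1, C, H*W)
    probs = []
    for start in range(0, num_pixels + 1, step):
        with torch.no_grad():
            logits = classifier(perturbed.view(img.shape))
        probs.append(torch.softmax(logits, dim=1)[0, target].item())
        # Zero out (delete) the next `step` most salient pixels.
        perturbed[..., order[start:start + step]] = 0.0
    # Trapezoidal AUC over the deletion fraction, normalized to [0, 1].
    y = torch.tensor(probs)
    return torch.trapz(y, dx=1.0 / (len(y) - 1)).item()
```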

Checklist

sandylaker commented 2 years ago

@rjagtani Please fix the lint issue first.

rjagtani commented 2 years ago

Thanks, I fixed the mypy issue. I also added pytest.approx to compare floating-point values, because exact comparison was causing one of the tests to fail.
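For illustration, this is the kind of comparison where a plain equality check fails but pytest.approx passes (the numbers here are made up, not from the actual tests):

```python
import pytest

def test_float_comparison():
    value = 0.1 + 0.2                    # floating-point arithmetic is inexact
    assert value != 0.3                  # plain equality fails: value is 0.30000000000000004
    assert value == pytest.approx(0.3)   # approx compares within a small tolerance
```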

sandylaker commented 2 years ago

@rjagtani In this iteration, I mainly reviewed the implementation. Please check the comments and resolve the issues. After the implementation is corrected, I will review the unit tests. Please note that the expected test coverage is > 90%. You can check the coverage report in the CI logs to see which lines are not covered by your tests.

sandylaker commented 2 years ago

@rjagtani Please mark the conversations as resolved after you make corresponding changes in your code.

rjagtani commented 2 years ago

> @rjagtani Please mark the conversations as resolved after you make corresponding changes in your code.

I thought I'd do it once the changes are approved

rjagtani commented 2 years ago

I'm done implementing the suggested changes

sandylaker commented 2 years ago

> I'm done implementing the suggested changes

Thank you for your effort. I will review the changes on the weekend.

sandylaker commented 2 years ago

@rjagtani The tensor dimensions in the test cases are too large, e.g. 3072 and 1024. Please note that the test cases are run on a machine with only 2 GB of memory. As more tests are written, they will run in parallel, and the large tensors will result in a big memory footprint.
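As a hedged illustration of the point, a tiny fixture like the sketch below exercises the same code paths as full-size inputs at a fraction of the memory; the helper name and shapes are hypothetical:

```python
import torch

def make_dummy_inputs(batch_size=1, num_channels=3, size=16):
    """Build a small random image batch and matching saliency maps.

    Shapes like (1, 3, 16, 16) keep the test memory footprint tiny
    compared with full-size tensors such as (1, 3, 224, 224).
    """
    imgs = torch.rand(batch_size, num_channels, size, size)
    smaps = torch.rand(batch_size, size, size)
    return imgs, smaps
```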

Also, you can squash your commits before pushing them. This PR already has 50+ commits, so the commit list is very long.

rjagtani commented 2 years ago

> @rjagtani The tensor dimensions in the test cases are too large, e.g. 3072 and 1024. Please note that the test cases are run on a machine with only 2 GB of memory. As more tests are written, they will run in parallel, and the large tensors will result in a big memory footprint.
>
> Also, you can squash your commits before pushing them. This PR already has 50+ commits, so the commit list is very long.

Thanks for reviewing the code. I think all implementation-related changes are done; I will continue with the testing-related changes tomorrow. I will also read up on squashing commits, since I've never done it before.

sandylaker commented 2 years ago

@rjagtani Squashing commits can be done easily in the GUIs of IDEs like PyCharm.

Even without squashing the commits, you do not have to push every change immediately after committing it. CI in many GitHub projects is triggered automatically, so the tests start running right after you push something. If you push another small change while the previous tests are still running, those tests will be canceled and re-run from the beginning, so computation resources are wasted.

Instead, you can commit several times locally and push them all at once.
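As a sketch of that workflow with standard git commands (the commit count of 5 is just an example):

```sh
# Combine the last 5 local commits into one, then update the remote branch.
git rebase -i HEAD~5           # mark all but the first commit as "squash" in the editor
git push --force-with-lease    # rewrite the PR branch without clobbering others' work
```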

sandylaker commented 2 years ago

@rjagtani One additional note for you:

Please add an extra argument to the initialization function of the metric: device: Union[str, torch.device] = "cuda:0". Then create an attribute self.device = device and move self.classifier to self.device. Make sure you freeze the whole classifier and switch it to eval mode in the initialization function.
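A minimal sketch of what that initializer could look like, assuming the metric class wraps an nn.Module classifier; the class name and the comments are illustrative:

```python
from typing import Union

import torch
import torch.nn as nn

class InsertionDeletion:
    def __init__(
        self,
        classifier: nn.Module,
        device: Union[str, torch.device] = "cuda:0",
    ) -> None:
        self.device = device
        # Move the classifier to the requested device.
        self.classifier = classifier.to(self.device)
        # Freeze the whole classifier: the metric only runs inference.
        for param in self.classifier.parameters():
            param.requires_grad_(False)
        # Eval mode disables dropout and freezes batch-norm statistics.
        self.classifier.eval()
```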