chr5tphr / zennit

Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.

Feature Request: Debugger #187

Open rachtibat opened 1 year ago

rachtibat commented 1 year ago

Hey,

I think it would be really nice if we could check whether the composite actually attached to all the modules we named, and whether any modules are left without a rule attached. In the long run - much more difficult to realize - it would also be nice to detect whether any non-Module operations unexpectedly changed the relevance values.
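A minimal sketch of the first check, assuming (as Zennit does) that rules are attached to modules as PyTorch hooks. The function name and the use of torch's private-but-long-stable `_forward_hooks`/`_backward_hooks` attributes are illustrative assumptions, not existing Zennit API:

```python
import torch.nn as nn

def modules_without_hooks(model):
    """Report leaf modules that have no hooks attached, i.e. candidates
    for a missing rule after a composite has been registered."""
    missing = []
    for name, module in model.named_modules():
        is_leaf = len(list(module.children())) == 0
        has_hooks = bool(
            module._forward_hooks
            or module._backward_hooks
            or module._forward_pre_hooks
        )
        if is_leaf and not has_hooks:
            missing.append(name)
    return missing

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
# No composite is registered here, so every leaf module is reported.
print(modules_without_hooks(model))
```

Run inside a composite context, an empty result would mean every named leaf module received a rule.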

Best

rachtibat commented 1 year ago

Regarding the second feature, detecting unexpectedly changed relevance values, this could be implemented by introducing a new rule/hook that

  1. returns fixed relevance values, e.g. `torch.ones_like(input) * 42` (note that `torch.ones_like` takes a tensor, not a shape),
  2. and then checks at the input whether the incoming gradient equals our fixed values.
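The two steps above can be sketched with plain PyTorch backward hooks; the "canary" value and hook names are hypothetical, not part of Zennit:

```python
import torch
import torch.nn as nn

CANARY = 42.0  # hypothetical fixed relevance value

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 2))

def emit_fixed(module, grad_input, grad_output):
    # Step 1: replace the gradient leaving this module's input
    # with a fixed, recognizable value.
    return tuple(
        torch.ones_like(g) * CANARY if g is not None else None
        for g in grad_input
    )

violations = []

def check_fixed(module, grad_input, grad_output):
    # Step 2: if anything between the two hooked modules modified the
    # gradient, the canary no longer arrives intact.
    for g in grad_output:
        if g is not None and not torch.all(g == CANARY):
            violations.append(module)

model[2].register_full_backward_hook(emit_fixed)
model[1].register_full_backward_hook(check_fixed)

out = model(torch.randn(3, 4, requires_grad=True))
out.sum().backward()
print("unexpected modifications:", violations)
```

For this feed-forward chain the canary passes straight from one hook to the next, so `violations` stays empty; any functional operation in between that altered the gradient would show up here.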

Best

chr5tphr commented 1 year ago

Hey Reduan,

yes, I think checking whether the composite assigned the rules correctly is a nice feature to have, as this is also done inside the tests. One sanity check that I also would like to introduce is #147, which is somewhat related.

Checking whether the gradient was modified in between rules may be a little challenging for anything that is not feed-forward, as the gradients may not be passed in the same order as the modules are defined within the model, but there might be an elegant way to do it.