enryH opened this issue 4 years ago
Dear team, I finally got some time to look into the latest version of iNNvestigate. Thank you very much for providing the package!
I want to use DTD and I wonder how the current implementation works exactly:
Do you choose different root points, corresponding to different decomposition rules, depending on the layer type or activation type?
In your old implementation you wanted to change the basic DTD setup, see here. Might it be that the change got lost?
Based on your introduction notebook I started to check the conservation property. The full code, including your utility functions, is here.
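For context, this is my understanding of the generic DTD propagation rule from the paper (not taken from the iNNvestigate source): relevance is redistributed via a first-order Taylor expansion at a root point $\tilde{a}$,

```latex
R_j = \sum_k \frac{(a_j - \tilde{a}_j)\, w_{jk}}{\sum_{j'} (a_{j'} - \tilde{a}_{j'})\, w_{j'k}}\, R_k
```

and different choices of root-point search domain yield different rules (e.g. $\mathbb{R}_+$ for ReLU activations gives the $z^+$-rule, a box domain at the pixel layer gives the $z^{\mathcal{B}}$-rule), which is why I am asking whether the implementation selects root points per layer.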
The test image is a 9.
Having no bias constraint:

```python
# Create analyzer
analyzer = innvestigate.create_analyzer("deep_taylor", model_wo_sm)

# Applying the analyzer
analysis = analyzer.analyze(image)

# Check conservation
scores = model_wo_sm.predict(image)
print("Maximum Score: {:.3f} with label {}".format(scores.max(), scores.argmax()))
print("sum of relevances assigned to inputs: {:.3f}".format(analysis.sum()))
try:
    assert abs(scores.max() - analysis.sum()) < 0.001
except AssertionError:
    print("not equal...")
# Biases are included and the conservation property in the DTD framework fails
```
```
Maximum Score: 11.711 with label 9
sum of relevances assigned to inputs: 11.561
not equal...
```
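For intuition about where the missing relevance goes, here is a minimal NumPy sketch (my own toy example, not iNNvestigate code) of a single dense unit with ReLU: when the bias appears in the denominator of the z-rule, it absorbs exactly its share of the relevance, while dropping the bias restores conservation.

```python
import numpy as np

# Toy single dense unit with ReLU; all names are illustrative,
# not part of iNNvestigate's API.
a = np.array([0.5, 1.0, 0.2, 0.8, 0.1])  # input activations
w = np.array([0.4, 0.3, 0.9, 0.2, 0.7])  # weights (all positive, so the
                                          # z-rule and z+-rule coincide here)
b = -0.3                                  # negative bias, as in the DTD paper

z = a * w                       # contributions z_j = a_j * w_j
out = max(z.sum() + b, 0.0)     # ReLU output
R = out                         # relevance starts as the output score

# z-rule WITH the bias in the denominator: the inputs only receive
# z.sum() = R - b in total, so the sum misses R by exactly the bias.
R_in_bias = z / (z.sum() + b) * R
print("with bias:    ", R_in_bias.sum(), "vs", R)

# Ignoring the bias (as LRPAlpha1Beta0IgnoreBias does): conservation holds.
R_in = z / z.sum() * R
print("ignoring bias:", R_in.sum(), "vs", R)
```

The gap in the first case is exactly `b`, which matches the pattern above: the analyzer output and the network score differ by the relevance attributed to the biases.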
```python
# LRP-Alpha_1-Beta_0 without biases is the z+ rule in the DTD paper
from innvestigate.analyzer.relevance_based.relevance_analyzer import LRPAlpha1Beta0IgnoreBias

analyzer = LRPAlpha1Beta0IgnoreBias(model_wo_sm)

# Applying the analyzer
analysis = analyzer.analyze(image)

# Check conservation
scores = model_wo_sm.predict(image)
print("Maximum Score: {:.3f} with label {}".format(scores.max(), scores.argmax()))
print("sum of relevances assigned to inputs: {:.3f}".format(analysis.sum()))
assert abs(scores.max() - analysis.sum()) < 0.001
```
```
Maximum Score: 11.711 with label 9
sum of relevances assigned to inputs: 11.711
```
Constraining the bias in ReLUs to be negative, as in the DTD paper:

```python
# Create analyzer
analyzer = innvestigate.create_analyzer("deep_taylor", model_wo_sm)

# Applying the analyzer
analysis = analyzer.analyze(image)

# Check conservation
scores = model_wo_sm.predict(image)
print("Maximum Score: {:.3f} with label {}".format(scores.max(), scores.argmax()))
print("sum of relevances assigned to inputs: {:.3f}".format(analysis.sum()))
try:
    assert abs(scores.max() - analysis.sum()) < 0.001
except AssertionError:
    print("not equal...")
# Biases are included and the conservation property in the DTD framework still fails
```
```
Maximum Score: 12.835 with label 9
sum of relevances assigned to inputs: 13.338
not equal...
```
```python
# LRP-Alpha_1-Beta_0 without biases is the z+ rule in the DTD paper
from innvestigate.analyzer.relevance_based.relevance_analyzer import LRPAlpha1Beta0IgnoreBias

analyzer = LRPAlpha1Beta0IgnoreBias(model_wo_sm)

# Applying the analyzer
analysis = analyzer.analyze(image)

# Check conservation
scores = model_wo_sm.predict(image)
print("Maximum Score: {:.3f} with label {}".format(scores.max(), scores.argmax()))
print("sum of relevances assigned to inputs: {:.3f}".format(analysis.sum()))
assert abs(scores.max() - analysis.sum()) < 0.001
```
```
Maximum Score: 12.835 with label 9
sum of relevances assigned to inputs: 12.835
```
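The check I repeat in each cell can be factored into a small helper (the function names here are my own, not iNNvestigate API), summarizing the four runs above:

```python
import numpy as np

# Hypothetical helper, not part of iNNvestigate: compares the explained
# output score against the total relevance assigned to the inputs.
def conservation_gap(score, relevance):
    """Absolute gap between the output score and the summed relevance."""
    return abs(float(score) - float(np.sum(relevance)))

def is_conservative(score, relevance, tol=1e-3):
    return conservation_gap(score, relevance) < tol

# Numbers from the runs above:
print(is_conservative(11.711, 11.561))  # deep_taylor, no bias constraint -> False
print(is_conservative(11.711, 11.711))  # z+-rule ignoring biases        -> True
print(is_conservative(12.835, 13.338))  # deep_taylor, negative biases   -> False
print(is_conservative(12.835, 12.835))  # z+-rule ignoring biases        -> True
```

So in both models only the bias-ignoring z+-rule is conservative, which is what prompts my question about the "deep_taylor" analyzer's treatment of biases.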