elboyran closed this issue 4 years ago.
Saving the pre-trained PatternNet and PatternAttribution analyzers saves a lot of time after the first training; the other analyzers are created on the fly. Still, out of 14 methods only 9 actually work!
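A minimal sketch of how the fitted pattern-based analyzers could be persisted and reloaded, assuming a trained Keras `model`, training data `x_train`, and test data `x_test` (all placeholders); the `save_npz`/`load_npz` helpers are my reading of the iNNvestigate 1.x analyzer API and should be double-checked:

```python
import innvestigate
import innvestigate.utils as iutils
from innvestigate.analyzer.base import AnalyzerBase

# Strip the softmax so the analyzer works on the pre-softmax scores.
model_wo_softmax = iutils.model_wo_softmax(model)

# PatternNet / PatternAttribution need a one-off fit on training data;
# this is the expensive step we want to avoid repeating.
analyzer = innvestigate.create_analyzer("pattern.net", model_wo_softmax)
analyzer.fit(x_train, batch_size=256, verbose=1)

# Persist the fitted state ...
analyzer.save_npz("pattern_net_mnist.npz")

# ... and restore it in a later session without refitting (assumed helper).
restored = AnalyzerBase.load_npz("pattern_net_mnist.npz")
heatmap = restored.analyze(x_test[:1])
```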
Notebook: Single Image Many Methods
The top 3 methods are: Deep Taylor, PatternAttribution, and PatternNet.
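For reference, a sketch of the "single image, many methods" comparison restricted to those three analyzers, assuming a trained Keras `model`, training data `x_train`, and MNIST test data `x_test` (all placeholders):

```python
import matplotlib.pyplot as plt
import innvestigate
import innvestigate.utils as iutils

model_wo_softmax = iutils.model_wo_softmax(model)   # trained Keras model (placeholder)
image = x_test[7:8]                                  # one MNIST test image (placeholder)

methods = ["deep_taylor", "pattern.attribution", "pattern.net"]
fig, axes = plt.subplots(1, len(methods) + 1, figsize=(12, 3))
axes[0].imshow(image[0].squeeze(), cmap="gray")
axes[0].set_title("input")

for ax, method in zip(axes[1:], methods):
    analyzer = innvestigate.create_analyzer(method, model_wo_softmax)
    if method.startswith("pattern."):
        # The pattern-based analyzers need a one-off fit on training data.
        analyzer.fit(x_train, batch_size=256, verbose=0)
    heatmap = analyzer.analyze(image).sum(axis=-1).squeeze()  # collapse channels
    m = abs(heatmap).max()
    ax.imshow(heatmap, cmap="seismic", vmin=-m, vmax=m)
    ax.set_title(method)

for ax in axes:
    ax.axis("off")
plt.show()
```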
iNNvestigate explainability software
[x] read iNNvestigate paper
[x] install iNNvestigate
[x] study and run the Intro notebook
[x] Study (and run) the MNIST example notebook and the different-classes-on-MNIST notebook (a minimal setup sketch follows this list)
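Setup along the lines of the Intro notebook; a minimal sketch assuming a trained Keras `model` and MNIST test data `x_test` (placeholders):

```python
# Installation (iNNvestigate 1.x targets Keras with a TensorFlow 1.x backend):
#   pip install innvestigate
import innvestigate
import innvestigate.utils as iutils

# Strip the softmax so attributions are computed on the pre-softmax scores.
model_wo_softmax = iutils.model_wo_softmax(model)

# Smoke test: one analyzer, one image.
analyzer = innvestigate.create_analyzer("gradient", model_wo_softmax)
analysis = analyzer.analyze(x_test[:1])
print(analysis.shape)  # same shape as the input image
```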
Fixes
[x] Fix Model1 with explicit unique layer names and retrain (see the layer-naming sketch after this list)
[x] Fix Model2 with explicit unique layer names and retrain
[x] Fix Model3 with explicit unique layer names and retrain
[x] Fix Model4 with explicit unique layer names and retrain
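The fix amounts to giving every layer an explicit, unique `name=` when the models are rebuilt, then retraining. A minimal sketch of the pattern; the architecture here is a placeholder, not the actual Model1-Model4 definitions:

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

def build_model(prefix="model1"):
    # Every layer gets an explicit, unique name (the fix referenced in the list above),
    # instead of relying on Keras' auto-generated names.
    model = Sequential([
        Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1),
               name=prefix + "_conv1"),
        MaxPooling2D((2, 2), name=prefix + "_pool1"),
        Flatten(name=prefix + "_flatten"),
        Dense(64, activation="relu", name=prefix + "_dense1"),
        Dense(10, activation="softmax", name=prefix + "_output"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```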
Compare all explainability methods in iNNvestigate. Starting point: the MNIST notebooks above.
[x] Create a heatmap of Model1 on a single test image with a single analyzer (gradient * input) (use Intro Notebook)
[x] Create a heatmap of Model1 on many test images with a single analyzer (gradient * input) (use Intro Notebook)
[x] Create heatmaps of Model1 on a single test image with many analyzers and with respect to all classes (use MNIST select neuron Notebook) and decide on the best 1 to 3 methods
[x] Create heatmaps of Model1 on multiple test images with respect to all classes (use MNIST select neuron Notebook), using the top 1 to 3 methods (per-class sketch after this list)
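The per-class heatmaps follow the select-neuron pattern from the MNIST notebook: create the analyzer with `neuron_selection_mode="index"` and pass the class index when analyzing. A minimal sketch with gradient * input, assuming a trained Keras `model` and MNIST test data `x_test` (placeholders):

```python
import matplotlib.pyplot as plt
import innvestigate
import innvestigate.utils as iutils

model_wo_softmax = iutils.model_wo_softmax(model)   # trained Keras model (placeholder)
image = x_test[0:1]                                  # one MNIST test image (placeholder)

# "index" mode lets us ask for an explanation with respect to any output neuron (class).
analyzer = innvestigate.create_analyzer("input_t_gradient", model_wo_softmax,
                                         neuron_selection_mode="index")

fig, axes = plt.subplots(1, 10, figsize=(20, 2))
for class_idx, ax in enumerate(axes):
    heatmap = analyzer.analyze(image, neuron_selection=class_idx)
    heatmap = heatmap.sum(axis=-1).squeeze()         # collapse channels for display
    m = abs(heatmap).max()
    ax.imshow(heatmap, cmap="seismic", vmin=-m, vmax=m)
    ax.set_title("class %d" % class_idx)
    ax.axis("off")
plt.show()
```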