I have updated the example all_methods.py. The code to choose networks and load patterns is simplified. Currently it only works with VGG16. Over the weekend I will upload fake patterns and then it should work with all networks.
Please drop A for now.
I just saw the notebooks. Looks good!
Did you run the code on a CPU or on a GPU?
On a CPU.
Should we move the notebooks to a separate folder (inside experiments) in which the Jupyter server can be run? I think it would look clearer and more structured. Or do you want to get rid of the scripts that have an .ipynb counterpart anyway?
I think it's a good idea to separate them! And no, I would like to keep the .py files because when one works over SSH on a cluster one cannot always run Jupyter.
Done, see #49
Thanks I will have a look!
Hi Max,
should the plotting of nets vs. methods for single images also be turned into a notebook? It's currently only available as a Python script.
If it isn't too much effort right now, that would be great! Though other things are more important.
Cheers
Please use an input range different from [0, 1] for MNIST. That one is misleading. Thanks!
Why is that? I also tried [-0.5, 0.5], but almost all explanations look worse that way.
Because input x gradient (or relevance, etc.) is 0 whenever the input is 0 :-) Thus they basically don't explain much. It doesn't look so good because the network is not "well" trained; I guess the feature extraction is not the best. Sebastian has some networks that are trained in a better way. But OK, let's keep it as it is for the eye's sake, and when we have more time we can improve it.
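To make this concrete, here is a tiny NumPy sketch (my own illustration, with made-up numbers): with a [0, 1] input range the MNIST background is exactly 0, so any input-times-gradient style attribution vanishes there no matter what the gradient says.

```python
import numpy as np

x = np.array([0.0, 0.0, 0.7, 1.0])      # a few pixels; the background pixels come first
grad = np.array([0.9, -0.4, 0.2, 0.1])  # hypothetical gradients w.r.t. those pixels

print(x * grad)            # [ 0.  -0.   0.14  0.1 ] -> zero-valued background gets no attribution

# Shifting the range, e.g. to [-0.5, 0.5], moves the background off zero,
# so it can receive attribution again (which may look noisier to the eye).
print((x - 0.5) * grad)
```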
define "worse" ?
can you maybe add a network class selector similar to the one in mnist_lrp.py?
We use (IPython) notebooks as the main way to introduce people to the API. "Learning by doing".
The following notebooks and contents should be part of the first release:
A) [core reversal code:] walk through the idea of inverting the graph, show how to implement the gradient by inverting each layer, then a slightly more advanced example showing how to create deconvnet and guided backprop (a rough sketch of this idea follows after this list).
B) [mnist complete workflow:] the same as the current mnist example, just with more comments on how things work. The code should also be structured more nicely, i.e., not using those functions, but written in a step-by-step fashion (see the API sketch below the list).
C) [imagenet application] a step-by-step example with ImageNet, again similar to all_methods.py. The scope here is to reuse the patterns and let people know how they can use our applications submodule.
D) [imagenet network comparison] Same as C, only the outcome should be a grid with n networks as rows and n methods as columns. The code can and will be a bit messier, as different networks need different preprocessing functions and image sizes. Overall it should be doable, since the key information is already present in the dictionary returned by innvestigate.applications.
E) [imagenet pattern training] similar to the current train_***.py. The focus is on how to train patterns for a large network. Ideally this example shows how to train with several GPUs (i.e., setting the parameter gpus=X; see also the fit() call in the sketch below).
F) [perturbation mnist] an example notebook on how to use perturbation analysis with MNIST that everybody can run (a concept sketch follows below).
G) [perturbation imagenet] an example notebook on how to use perturbation analysis with ImageNet.
H) [LRP/DT intro] Sebastian might want to add an LRP/DT notebook.
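For notebook A, a rough NumPy illustration of the idea (my own sketch, not the library's implementation): the backward pass is built by walking the layers in reverse order, and deconvnet and guided backprop differ from the plain gradient only in the reverse rule applied at the ReLU.

```python
import numpy as np

def forward(x, W1, W2):
    z1 = x @ W1             # dense layer 1 (pre-activation)
    a1 = np.maximum(z1, 0)  # ReLU
    y = a1 @ W2             # dense layer 2 (network output)
    return z1, a1, y

def backward(r, z1, W1, W2, rule="gradient"):
    # Reverse the layers in opposite order; r is the signal flowing back.
    r = r @ W2.T                        # invert dense layer 2
    if rule == "gradient":              # plain gradient: gate by the forward activation
        r = r * (z1 > 0)
    elif rule == "deconvnet":           # deconvnet: gate by the backward signal only
        r = np.maximum(r, 0)
    elif rule == "guided":              # guided backprop: gate by both
        r = np.maximum(r, 0) * (z1 > 0)
    return r @ W1.T                     # invert dense layer 1

rng = np.random.default_rng(0)
x, W1, W2 = rng.normal(size=(1, 8)), rng.normal(size=(8, 16)), rng.normal(size=(16, 1))
z1, a1, y = forward(x, W1, W2)
for rule in ("gradient", "deconvnet", "guided"):
    print(rule, backward(np.ones_like(y), z1, W1, W2, rule).shape)
```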
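For notebooks B/C (and the training part of E), a hedged sketch of the step-by-step flow: the calls below (model_wo_softmax, create_analyzer, analyze, fit) follow the public iNNvestigate API as I understand it, while the tiny stand-in model and input shapes are placeholders for what the notebook would build or load via innvestigate.applications.

```python
import numpy as np
import keras
import innvestigate
import innvestigate.utils

# A tiny stand-in model; notebook B would use the trained MNIST net and
# notebook C would load a pretrained net via innvestigate.applications.
model = keras.models.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    keras.layers.Dense(10, activation="softmax"),
])

# Strip the softmax so the analyzer explains the pre-softmax score.
model_wo_sm = innvestigate.utils.model_wo_softmax(model)

# Create an analyzer by name and run it on a batch of inputs.
analyzer = innvestigate.create_analyzer("lrp.epsilon", model_wo_sm)
x_batch = np.random.rand(16, 784)
analysis = analyzer.analyze(x_batch)
print(analysis.shape)  # same shape as the input batch

# Pattern-based analyzers additionally need training data before analysis;
# this is the part notebook E would expand on (incl. the multi-GPU option).
pattern_analyzer = innvestigate.create_analyzer("pattern.net", model_wo_sm)
pattern_analyzer.fit(x_batch, batch_size=8, verbose=0)
pattern_analysis = pattern_analyzer.analyze(x_batch)
```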
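For notebooks F/G, I am not writing against innvestigate.tools here (the exact interface would have to be checked); instead this is a small self-contained sketch of the perturbation idea itself: occlude the input regions that an analyzer marked as most relevant and watch how the predicted score drops.

```python
import numpy as np

def perturbation_curve(x, relevance, score_fn, patch=4, steps=10, fill=0.0):
    """Occlude patches in order of decreasing relevance and record the score.

    x         : single image, shape (H, W)
    relevance : heatmap of the same shape, e.g. an analyzer output
    score_fn  : callable mapping an image to a scalar (e.g. class probability)
    """
    h, w = x.shape
    # Aggregate relevance per patch and sort patches from most to least relevant.
    patches = [(relevance[i:i + patch, j:j + patch].sum(), i, j)
               for i in range(0, h, patch) for j in range(0, w, patch)]
    patches.sort(reverse=True)

    x_pert = x.copy()
    scores = [score_fn(x_pert)]
    for _, i, j in patches[:steps]:
        x_pert[i:i + patch, j:j + patch] = fill   # occlude the next patch
        scores.append(score_fn(x_pert))
    return scores  # a steep drop indicates a faithful explanation

# Toy usage with a made-up "score": the sum of the top-left quadrant.
rng = np.random.default_rng(0)
img = rng.random((28, 28))
heat = np.zeros_like(img)
heat[:14, :14] = 1.0   # pretend the analyzer found the top-left region relevant
print(perturbation_curve(img, heat, lambda im: im[:14, :14].sum()))
```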
I think if a working Python script exists first, it is easy to create the notebook. The advantage would be that one can run the examples easily via SSH. The drawback is that the code is duplicated, and when it changes one needs to update two places.