Closed: kirk86 closed this issue 7 years ago.
Thanks for the detailed description. I will get to the bottom of each one by the end of this weekend.
I added notebook examples for MNIST in the examples/ folder. Hopefully that clarifies a lot of things. Also, the API changed: visualize_xxx now has separate regression and class variants.
Take a look at that and let me know if that works for you.
@raghakot Hi, sorry for posting this here. It's not related to keras-vis, but I was hoping to get some quick feedback. You also have a repo, keras-resnet --
have you noticed any problems with convergence? I've tried pretty much everything I can think of, from reducing the learning rate to changing optimizers, but nothing has worked so far. I can't get it above 46% accuracy. Did you have any of these issues?
Hmm. There is a CIFAR training example in there that should converge pretty well. I wonder if something broke with the latest Keras. In either case, feel free to open an issue there and I will take a look at it.
I haven't seen any correspondence related to this current issue. I am closing this assuming it's fixed. Feel free to reopen or create new issues if you see anything off.
Hi man, I've found a number of weird results and errors when using keras-vis. Let me go through them one by one.
Images need to be in the range [0, 255] for cam; I'm not sure about saliency, though. Maybe you should handle this internally, like when we use the deprocess function from vgg16.
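For reference, this is roughly what I have to do by hand right now. The rescaling is the point; the layer_idx / filter_indices values are just placeholders, and the seed image argument name may differ between versions:

```python
import numpy as np
from vis.visualization import visualize_cam

# img is a grayscale image with float values in [0, 1]; cam seems to want [0, 255].
img = np.random.rand(28, 28, 1)                # stand-in for a real input image
seed = (img * 255.0).astype('uint8')           # manual rescale to [0, 255]

# Placeholder call: `model` would be a trained Keras model, and the exact
# argument names (seed_img vs seed_input) vary between keras-vis versions.
# heatmap = visualize_cam(model, layer_idx=-1, filter_indices=[0], seed_img=seed)
```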
Weird results occur with every method, whether it's cam, saliency, dense-layer visualization, or convolutional-layer visualization.
Example:
For convolution-layer filter visualization, your method outputs the filters in color even though the model was trained on grayscale images, for instance. Why? Is that because it scales the images to [0, 255]?
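For reference, this is how I would expect to display the result for a grayscale model; if the returned array already has 3 channels, I'm guessing a colormap gets applied somewhere internally (just a guess on my part):

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for whatever visualize_activation / visualize_cam returned for one filter.
img = np.random.rand(28, 28, 1)

if img.ndim == 3 and img.shape[-1] == 1:
    img = img[..., 0]              # drop the singleton channel before plotting
plt.imshow(img, cmap='gray')       # force grayscale display for a grayscale-trained model
plt.axis('off')
plt.show()
```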
Please provide some examples in the docs showing how to produce saliency maps and dense-layer visualizations for an MLP.
If possible, provide all the examples on standard datasets such as MNIST and CIFAR. They are easier to test with than huge models trained on ImageNet.
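To be concrete, something along these lines is the kind of MLP example I'd like to see for MNIST. I'm guessing at the exact call here, so the layer_idx / filter_indices values and the seed_img argument name are placeholders and may differ between versions:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Flatten, Dense
from vis.visualization import visualize_saliency

# A minimal MLP that keeps image-shaped input, so the saliency map is itself an image.
model = Sequential([
    Flatten(input_shape=(28, 28, 1)),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax'),
])
# ... compile and train on MNIST here ...

digit = np.random.rand(28, 28, 1)  # stand-in for a real MNIST digit
# Saliency of class 3 w.r.t. the input pixels, taken at the final (output) layer.
sal = visualize_saliency(model, layer_idx=-1, filter_indices=[3], seed_img=digit)
```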
The draw util for annotating images is broken; it doesn't work.
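In the meantime I'm drawing the labels myself with PIL instead of the library's draw util, roughly like this (just my workaround, not a fix for the util itself):

```python
import numpy as np
from PIL import Image, ImageDraw

def draw_label(img_array, text):
    """Annotate an HxWx3 uint8 image array with a text label; returns a PIL image."""
    img = Image.fromarray(img_array)
    ImageDraw.Draw(img).text((2, 2), text, fill=(255, 0, 0))
    return img

# Example: label a blank image and save it.
labeled = draw_label(np.zeros((64, 64, 3), dtype=np.uint8), 'cat: 0.93')
labeled.save('labeled.png')
```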
Why is there such a discrepancy between saliency and cam?
Why does cam return no result in some cases but work in others? Is that expected behavior?
How would you interpret the filters from the dense and convolutional layers? I have no idea; to me they seem wrong.
These are the results I get with my own method.
Somewhere in the visualization.py file it throws a warning because of a division by zero:

/Users/user/anaconda2/lib/python2.7/site-packages/keras_vis-0.3-py2.7.egg/vis/visualization.py:247: RuntimeWarning: invalid value encountered in divide
  grads /= np.max(grads)
/Users/user/anaconda2/lib/python2.7/site-packages/matplotlib/colors.py:496: RuntimeWarning: invalid value encountered in less
  cbook._putmask(xa, xa < 0.0, -1)

In the gist below I provide all of the issues, especially at the end in the comments. It would be nice if the things in those comments could be replicated as examples to show us how they work. None of the examples at the end worked for me.
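For what it's worth, the warning goes away if I normalize the gradients myself before plotting, something along these lines (just a local workaround sketch with an epsilon guard, not a patch to visualization.py):

```python
import numpy as np

def safe_normalize(grads, eps=1e-8):
    """Scale an array to [0, 1], avoiding the divide-by-zero when it is all zeros."""
    g = grads - np.min(grads)
    return g / (np.max(g) + eps)

print(safe_normalize(np.zeros((3, 3))))  # no RuntimeWarning, returns all zeros
```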