CMCDragonkai opened this issue 6 years ago (status: Open)
I decided not to use this system for visualisation as it wasn't really understandable.
Instead I presented a CNN visualisation paper a few weeks back in Sydney: https://www.meetup.com/Sydney-Paper-Club/events/pssrqpyxmbjb/
Deconvnets would need to be adapted to the ResNet architecture, as the technique was originally designed for the VGG16 architecture.
Hi, I'm trying out this visualisation library and ran it on a simple MNIST network.
I'm comparing the activation maximisation used here with the one described in this Keras blog post: https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html
The resulting filter visualisations are very different, but I expected them to be similar in principle.
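For reference, my keras-vis call is roughly along these lines (this is a sketch, not my exact script; the model path, layer name `conv2`, and filter index 0 are just from my setup):

```python
from keras.models import load_model
from vis.utils import utils
from vis.visualization import visualize_activation

model = load_model('mnist_cnn.h5')  # hypothetical path to the trained MNIST model

# resolve the conv layer by name and run activation maximisation on one filter
layer_idx = utils.find_layer_idx(model, 'conv2')
img = visualize_activation(model, layer_idx, filter_indices=0)
```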
Here is what I get from using keras-vis:
Here's what I see when I use the approach from the Keras article:
I'm wondering why the images look so different. When I look at the source code, your library is far more complex than the example in the Keras blog, but it seems to work in a similar way: start from a random seed image and run gradient ascent on it.
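For comparison, the blog post's approach boils down to something like this (assuming a channels-last 28x28x1 MNIST input and a layer named `conv2`; the step size and iteration count are arbitrary):

```python
import numpy as np
from keras import backend as K

filter_index = 0
layer_output = model.get_layer('conv2').output

# loss = mean activation of the chosen filter (channels-last layout)
loss = K.mean(layer_output[:, :, :, filter_index])

# gradient of that loss w.r.t. the input image, normalised for stability
grads = K.gradients(loss, model.input)[0]
grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)

iterate = K.function([model.input], [loss, grads])

# start from a noisy grey image and run plain gradient ascent
input_img = np.random.random((1, 28, 28, 1)) * 0.2 + 0.5
for _ in range(40):
    loss_value, grads_value = iterate([input_img])
    input_img += grads_value * 1.0  # step size
```

This is plain, unregularised gradient ascent on the input, which is the part I expected to behave similarly in both cases.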
The model is very simple, and I'm only visualising the filters at conv2. The network was trained to: