Hey, I was actually planning on implementing an example of deconvnets. Do you have any paper suggestions?
Thank you @utkuozbulak for your kind reply. There are two papers you can't miss.
The first is Zeiler, Matthew D., et al. "Deconvolutional Networks." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010. The second is Zeiler, Matthew D., and Rob Fergus. "Visualizing and Understanding Convolutional Networks." European Conference on Computer Vision (ECCV), Springer, 2014.
Existing TensorFlow implementations may also be useful as a reference.
A very late update: having re-read some of the visualization work, I have decided not to implement deconvnets, since what they produce is very similar to backprop/guided backprop and is already outdated compared to both. You can read the details here:
As we show below, DeconvNet-based reconstruction of the nth layer input Xn is either equivalent or similar to computing the gradient of the visualised neuron activity f with respect to Xn, so DeconvNet effectively corresponds to the gradient back-propagation through a ConvNet.
from: K. Simonyan, A. Vedaldi, A. Zisserman. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, https://arxiv.org/abs/1312.6034
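To make that concrete, here is a minimal sketch (not code from this repo) of why the two produce nearly the same picture: with PyTorch module hooks, switching from plain backprop to a DeconvNet only changes the backward rule for ReLU. The AlexNet model, the random placeholder input, and the hook registration below are assumptions for illustration, not the repo's implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

def deconv_relu_hook(module, grad_in, grad_out):
    # DeconvNet rule: pass back only the positive part of the incoming gradient
    # and ignore which forward activations were positive. Plain backprop instead
    # masks by the forward ReLU pattern; guided backprop applies both masks.
    return (torch.clamp(grad_out[0], min=0.0),)

model = models.alexnet(pretrained=True).eval()
for layer in model.features:
    if isinstance(layer, nn.ReLU):
        layer.inplace = False                  # avoid in-place ops interfering with the hook
        layer.register_backward_hook(deconv_relu_hook)

prep_img = torch.randn(1, 3, 224, 224, requires_grad=True)  # placeholder preprocessed image
output = model(prep_img)
output[0, output.argmax()].backward()          # backprop from the top predicted class
deconv_grads = prep_img.grad                   # compare with plain backprop gradients
```

Since guided backprop simply combines the two masks (the forward ReLU pattern and the positive incoming gradients), all three methods end up producing very similar gradient images.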
However, what I want to do is implement a layer visualization for a specific input image (unlike the layer visualization already in the repo, which tries to maximize the mean activation). I want to create visualizations for target layers similar to Figure 3 in J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for Simplicity: The All Convolutional Net, https://arxiv.org/abs/1412.6806. A rough sketch of the idea is below.
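The sketch assumes a torchvision AlexNet; `target_layer` and `target_filter` are hypothetical parameters, not existing repo code. The idea is to run the image forward only up to the chosen layer, then backpropagate one filter's activation down to the input image.

```python
import torch
from torchvision import models

model = models.alexnet(pretrained=True).eval()
prep_img = torch.randn(1, 3, 224, 224, requires_grad=True)  # placeholder preprocessed image

target_layer = 10   # index into model.features (assumption)
target_filter = 5   # filter/channel of that layer to visualise (assumption)

# Forward pass only up to the target layer.
x = prep_img
for index, layer in enumerate(model.features):
    x = layer(x)
    if index == target_layer:
        break

# Backpropagate the chosen filter's activation down to the input image.
# With guided-backprop ReLU hooks registered (as in the sketch above), the
# resulting gradient image corresponds to Figure 3 of Springenberg et al.
x[0, target_filter].sum().backward()
layer_vis = prep_img.grad.detach()
```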
Dear Sir, if possible, would you please implement deconvolutional networks? I think that method is very popular as well.