Closed lvmeng8 closed 6 years ago
Hi @lvmeng8 ! Thank you for your question. The gradient of a convolution is the same as a transposed convolution, which the paper refers to as the deconvolution operation. For a description, see https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose which says:
"This operation is sometimes called 'deconvolution' after Deconvolutional Networks, but is actually the transpose (gradient) of conv2d rather than an actual deconvolution." We have defined custom gradients for LRN and ReLU - see https://github.com/InFoCusp/tf_cnnvis/blob/master/tf_cnnvis/tf_cnnvis.py, lines 28-45, which do the right thing.
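To see why the gradient of a convolution is a transposed convolution, here is a minimal NumPy sketch (not the project's code) for the 1-D case: writing the forward pass as a matrix multiply y = W @ x, backprop multiplies the upstream gradient by W.T, and that product equals a "full" convolution of the gradient with the kernel:

```python
import numpy as np

def conv1d_valid(x, w):
    """'valid' cross-correlation, as tf.nn.conv2d does (no kernel flip)."""
    K = len(w)
    return np.array([x[i:i + K] @ w for i in range(len(x) - K + 1)])

def conv_matrix(w, n):
    """Toeplitz matrix W such that conv1d_valid(x, w) == W @ x."""
    K = len(w)
    W = np.zeros((n - K + 1, n))
    for i in range(n - K + 1):
        W[i, i:i + K] = w
    return W

rng = np.random.default_rng(0)
x = rng.standard_normal(8)          # input signal
w = rng.standard_normal(3)          # kernel
g = rng.standard_normal(6)          # upstream gradient, same shape as output

W = conv_matrix(w, len(x))
assert np.allclose(W @ x, conv1d_valid(x, w))

# Backprop through the convolution is multiplication by the transpose ...
dx_transpose = W.T @ g
# ... which is exactly a "full" convolution of g with the kernel
# (the transposed-convolution / "deconvolution" operation):
dx_full = np.convolve(g, w, mode="full")
assert np.allclose(dx_transpose, dx_full)
```

Note that because the forward pass is a cross-correlation, the backward pass convolves with the flipped kernel, which is why the operation earned the (slightly misleading) name "deconvolution".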
Also, for max pooling, the gradient performs the unpooling operation: https://github.com/tensorflow/tensorflow/issues/2169
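The max-pooling case can be sketched in the same spirit (again a NumPy illustration, not the project's code): the gradient routes each upstream value back to the position that attained the max, which is the "switches"-based unpooling of Zeiler & Fergus:

```python
import numpy as np

def maxpool1d(x, size):
    """Non-overlapping 1-D max pooling; also return the argmax positions."""
    xr = x.reshape(-1, size)
    idx = xr.argmax(axis=1)
    return xr.max(axis=1), idx

def maxpool1d_grad(g, idx, size):
    """Gradient of max pooling: each upstream gradient value is routed
    back to the location that produced the max (the unpooling step)."""
    dx = np.zeros((len(g), size))
    dx[np.arange(len(g)), idx] = g
    return dx.ravel()

x = np.array([1., 5., 2., 0., 3., 4.])
y, idx = maxpool1d(x, size=2)                       # y = [5., 2., 4.]
dx = maxpool1d_grad(np.array([10., 20., 30.]), idx, size=2)
# dx = [0., 10., 20., 0., 0., 30.] - gradients land at the max positions
```

Everywhere except the recorded max positions the gradient is zero, so backprop through max pooling really is the unpooling operation from the paper.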
Thanks. I think section 4 of "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps" may help us understand the relationship between the gradient and the deconvolution, rectification, and unpooling operations.
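The remaining piece, rectification, is the one that actually needs a custom gradient; a small sketch of the distinction (illustrative only, not the repository's implementation) on a single ReLU:

```python
import numpy as np

x = np.array([-2., 1., 3., -1.])     # pre-activation values at a ReLU
g = np.array([ 5., -4., 2., 7.])     # signal arriving from the layer above

# Plain backprop: pass the signal only where the *forward* input was positive.
backprop = g * (x > 0)               # [0., -4., 2., 0.]

# Deconvnet "rectification": apply ReLU to the *backward* signal itself,
# regardless of the forward activation - the kind of behaviour a custom
# ReLU gradient can implement.
deconvnet = np.maximum(g, 0.)        # [5., 0., 2., 7.]

# Guided backpropagation combines both masks.
guided = g * (x > 0) * (g > 0)       # [0., 0., 2., 0.]
```

This is why convolution and pooling come for free from TensorFlow's built-in gradients, while ReLU (and LRN) need gradient overrides to reproduce the deconvnet behaviour.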
Hi, thanks for your great work. I read the code of this project, but I have a question - can you give me an explanation? In the paper 'Visualizing and Understanding Convolutional Networks', an approach based on a deconvnet, unpooling, and rectification is introduced. In the README you say you refer to 'Visualizing and Understanding Convolutional Networks', but I cannot find any code for the deconvnet, unpooling, or rectification operations. I find that the output is just the gradient of the layer output tensor with respect to X in the function '_deconvolution' in tf_cnnvis.py. Is this related to the deconvnet, unpooling, and rectification introduced in 'Visualizing and Understanding Convolutional Networks'?