eveningdong opened 7 years ago
@NanqingD The last three layers of FCNs are convolution layers.
@yeshenlin Hi, I made a pull request; you should understand what I am saying after you see it. As Yann LeCun says, all FC layers are convolution layers. There may be a bug here if you do a dimension test on it.
The padding 'SAME' is required to get a whole-image output after FCN upsampling. Yes, the output of fc6 is not equal to the output of a VGG classification network. This is, however, by design: the goal is to perform segmentation, not to show how to perform classification with conv layers.
The output of our network is equal to a VGG network convolved in sliding-window fashion over the input. The center pixel of our [batch_size, 7, 7, 4096] grid is equivalent to the FC layer output in classification VGG.
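A small sketch of that equivalence (NumPy stand-in with tiny assumed channel sizes, 8 and 16 instead of 512 and 4096): applying the FC weight matrix, reshaped into a 7x7 kernel, at the single VALID position of a 7x7 feature map reproduces the flatten-and-matmul FC output exactly.

```python
import numpy as np

# Tiny stand-in sizes so the demo runs instantly (512 -> 8, 4096 -> 16).
H = W = 7
C_in, C_out = 8, 16

rng = np.random.default_rng(0)
feat = rng.standard_normal((H, W, C_in))           # one 7x7 input feature map
fc_w = rng.standard_normal((H * W * C_in, C_out))  # FC weight matrix

# Classification path: flatten the feature map, then matmul with FC weights.
fc_out = feat.reshape(-1) @ fc_w                   # shape (C_out,)

# Convolution path: reshape the SAME weights into a [7, 7, C_in, C_out]
# kernel and evaluate the single VALID position (kernel covers the whole map).
kernel = fc_w.reshape(H, W, C_in, C_out)
conv_out = np.einsum('hwc,hwco->o', feat, kernel)  # shape (C_out,)

# The two linear maps are identical.
assert np.allclose(fc_out, conv_out)
```

With 'SAME' padding instead of 'VALID', the same kernel slides over the (zero-padded) map and produces a 7x7 grid of such outputs; the center position is the one that coincides with the FC computation above.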
Hi, I was directed here by
https://datascience.stackexchange.com/questions/12830/how-are-1x1-convolutions-the-same-as-a-fully-connected-layer

```python
if name == 'fc6':
    filt = self.get_fc_weight_reshape(name, [7, 7, 512, 4096])
elif name == 'score_fr':
    name = 'fc8'  # Name of score_fr layer in VGG Model
    filt = self.get_fc_weight_reshape(name, [1, 1, 4096, 1000],
                                      num_classes=num_classes)
else:
    filt = self.get_fc_weight_reshape(name, [1, 1, 4096, 4096])
conv = tf.nn.conv2d(bottom, filt, [1, 1, 1, 1], padding='SAME')
```

My question is about the 'fc6' layer. Assume that at that layer the input `bottom` has shape `[batch_size, 7, 7, 512]` and the weight matrix `filt` has shape `[7, 7, 512, 4096]`. Then after `tf.nn.conv2d(bottom, filt, [1, 1, 1, 1], padding='SAME')`, the output `conv` should have shape `[batch_size, 7, 7, 4096]`. Even if it is treated as a 1x1 convolution, it is not a fully connected layer.
Can you please explain how the output will have shape [batch_size, 7, 7, 4096] and not [batch_size, 1, 1, 4096]? If the input is `[batch_size, 7, 7, 512]` and the filter is `[7, 7, 512, 4096]`, shouldn't I get a single pixel value for each `[7, 7, 512]` window, repeated 4096 times? My understanding might be wrong, but can you please clarify?
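The shape difference comes down to the padding mode. A sketch of TensorFlow's stride-1 output-size arithmetic (formulas from the `tf.nn.conv2d` padding semantics) shows why 'SAME' keeps the 7x7 grid while 'VALID' would collapse it to the single 1x1 output described above:

```python
import math

def conv_out_size(in_size, kernel, stride=1, padding='SAME'):
    """Spatial output size of a 2-D convolution along one axis."""
    if padding == 'SAME':
        # Input is zero-padded so every input position gets an output.
        return math.ceil(in_size / stride)
    elif padding == 'VALID':
        # Kernel must fit entirely inside the input.
        return math.ceil((in_size - kernel + 1) / stride)
    raise ValueError(padding)

# fc6: 7x7 input, 7x7 kernel, stride 1
print(conv_out_size(7, 7, padding='SAME'))   # 7  -> output [batch_size, 7, 7, 4096]
print(conv_out_size(7, 7, padding='VALID'))  # 1  -> output [batch_size, 1, 1, 4096]
```

So with 'VALID' padding you would indeed get one value per `[7, 7, 512]` window, i.e. `[batch_size, 1, 1, 4096]`; 'SAME' zero-pads the input so the kernel is evaluated at all 7x7 positions.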