ethereon / netscope

Neural network visualizer
http://ethereon.github.io/netscope

Display layer shapes #10

Closed rlnx closed 7 years ago

rlnx commented 7 years ago

I've implemented displaying the data shapes propagated through the network at each layer; this feature was requested by @sunsided in #7. Here is an example with ResNet. @ethereon, please review my changes.

RSly commented 7 years ago

Hi @RuslanIsrafilov and @ethereon

I checked the feature above; it is really nice and very helpful.

There is a small bug in the deconvolution layer: the output size of a deconv layer should be calculated as follows: https://github.com/BVLC/caffe/blob/master/src/caffe/layers/deconv_layer.cpp#L7-L22

Can you take a look, please? For example, upscore2 should become 12x12.
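For reference, the linked Caffe code reduces to a single formula per spatial dimension. A minimal JavaScript sketch (the sample values below are illustrative and not necessarily the actual upscore2 configuration):

```javascript
// Caffe's deconvolution output-size rule (from deconv_layer.cpp):
//   out = stride * (in - 1) + kernelExtent - 2 * pad
// where kernelExtent = dilation * (kernel - 1) + 1.
function deconvOutputSize(input, kernel, stride, pad, dilation = 1) {
  const kernelExtent = dilation * (kernel - 1) + 1;
  return stride * (input - 1) + kernelExtent - 2 * pad;
}

// Illustrative parameters: input 6, kernel 4, stride 2, pad 1, dilation 1
// → 2 * (6 - 1) + 4 - 2 * 1 = 12
console.log(deconvOutputSize(6, 4, 2, 1));
```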


rlnx commented 7 years ago

@RSly, that's not a bug; I just haven't implemented output size inference for the deconvolution layer. Have a look at layers.coffee: each layer is wrapped in a class, so it's easy to implement by adding a new DeconvolutionLayer class with an inferShapes method.

RSly commented 7 years ago

@RuslanIsrafilov Thanks, I could easily add deconv using your convolution template!

I think it would be great if you could add a couple of these useful layers in your commit: Deconvolution, Crop, Reshape, Flatten... Without them, the shapes shown for the FCN examples are wrong.

p.s. here is the complete list for reference: http://caffe.berkeleyvision.org/tutorial/layers.html

as a quick solution, I added these lines:

layers.Crop = class @CropLayer
    inferShapes: (bottoms, tops) =>
        unless tops?[0]? then return
        tops[0].shape = bottoms[1].shape
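The crop rule in this snippet simply copies the shape of the second (reference) bottom blob; a JavaScript sketch of the same rule (note that Caffe's actual Crop layer only copies dimensions from the crop axis onward, keeping earlier dimensions from the first bottom):

```javascript
// Simplified crop shape rule: the top blob takes the shape of the
// second bottom blob (the reference blob being cropped against).
function inferCropShape(bottomShapes) {
  return bottomShapes[1].slice(); // copy, don't alias, the reference shape
}

// e.g. cropping a 16x16 score map against a 12x12 reference blob
console.log(inferCropShape([[1, 21, 16, 16], [1, 21, 12, 12]]));
```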

layers.Deconvolution = class @DeconvolutionLayer
    constructor: (attribs) ->
        params = attribs?.convolution_param
        unless params?
            throw 'Deconvolution layer must have convolution_param.'
        @filters = params.num_output
        @padding = extractPaddingSizes params
        @stride = extractStrideSizes params
        @kernel = extractKernelSizes params
        @dilation = getValueOrDefault params.dilation, 1
        @axis = getValueOrDefault params.axis, 1

    inferShapes: (bottoms, tops) =>
        unless tops?[0]? then return
        @checkParameters bottoms, tops
        # Deconvolution layer behaviour is aligned with Caffe:
        # the layer processes each bottom -> top pair independently
        for i in [0...tops.length]
            @inferShapesForOneBlob bottoms[i], tops[i]

    inferShapesForOneBlob: (bottom, top) =>
        inputShape = bottom.shape
        outputShape = inputShape[..]
        outputShape[@axis] = @filters
        succeedingDimensions = inputShape[@axis + 1..]
        sucDimLength = succeedingDimensions.length
        padding  = getParameterAsArray @padding,  sucDimLength, 'padding'
        kernel   = getParameterAsArray @kernel,   sucDimLength, 'kernel'
        stride   = getParameterAsArray @stride,   sucDimLength, 'stride'
        dilation = getParameterAsArray @dilation, sucDimLength, 'dilation'
        for i in [@axis + 1...inputShape.length]
            ii = i - @axis - 1
            kernelExtent = dilation[ii] * (kernel[ii] - 1) + 1
            outDim = stride[ii] * (inputShape[i] - 1) + kernelExtent - 2 * padding[ii]
            outputShape[i] = Math.floor outDim
        top.shape = outputShape

    checkParameters: (bottoms, tops) =>
        unless @filters?
            throw 'Deconvolution layer must have num_output parameter.'
        if not @kernel?[0]? or not @kernel?[1]?
            throw 'Deconvolution kernel sizes must be set.'
        unless bottoms?
            throw 'Deconvolution layer received undefined bottom blobs.'
        if bottoms.length != tops.length
            throw "Deconvolution layer requires the number of top blobs to equal " +
                  "the number of bottom blobs, but received #{tops.length} top blobs and " +
                  "#{bottoms.length} bottom blobs."

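The shape inference above can be checked end to end with a small JavaScript sketch of the same logic (function and parameter names are illustrative; per-dimension parameters are passed directly as arrays here rather than broadcast from scalars):

```javascript
// Sketch of deconvolution shape inference: for each spatial dimension
// after `axis`, apply out = stride * (in - 1) + kernelExtent - 2 * pad.
function inferDeconvShape(inputShape, { filters, axis = 1, kernel, stride, padding, dilation }) {
  const outputShape = inputShape.slice();
  outputShape[axis] = filters; // channel dimension becomes num_output
  for (let i = axis + 1; i < inputShape.length; i++) {
    const ii = i - axis - 1;
    const kernelExtent = dilation[ii] * (kernel[ii] - 1) + 1;
    outputShape[i] = stride[ii] * (inputShape[i] - 1) + kernelExtent - 2 * padding[ii];
  }
  return outputShape;
}

// Illustrative blob: 6x6 spatial input, 21 output channels,
// 4x4 kernel, stride 2, pad 1, dilation 1
console.log(inferDeconvShape([1, 512, 6, 6], {
  filters: 21, kernel: [4, 4], stride: [2, 2], padding: [1, 1], dilation: [1, 1],
}));
```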
rlnx commented 7 years ago

@RSly, thanks, I've added support for the Deconvolution and Crop layers. The FCN example seems to work correctly now: https://ruslanisrafilov.github.io/netscope/#/preset/fcn-8s-pascal.

ethereon commented 7 years ago

Hey @RuslanIsrafilov, excellent work! While I haven't had the opportunity to review it closely, it looks great.

The one concern that I do have involves unsupported layer types. This implementation treats them as pass-through in terms of their output shapes, which can lead to displaying misleading dimensions. I'd much rather have nothing displayed when the layer type is unknown than a potentially incorrect shape.

(Apologies for the late response; haven't had much free time lately).

rlnx commented 7 years ago

@ethereon, I've fixed this behaviour and added a warning message when the layer type is unknown and the data shape can't be inferred (example). Shapes are now correctly processed up to the unknown layer and are not displayed for the layers that follow it.

rlnx commented 7 years ago

@ethereon, is there any possibility you would review/merge my PR? :)

ethereon commented 7 years ago

@RuslanIsrafilov Looks great, thanks!