vlfeat / autonn

A fast and expressive Matlab/MatConvNet deep learning API, with native automatic differentiation

Does it have fully connected layer? #49

Closed — damdaepark closed this issue 5 years ago

damdaepark commented 5 years ago

I found that some commonly used layers, including the convolutional layer (vl_nnconv), the batch normalization layer (vl_nnbnorm), and the activation layer (vl_nnrelu), are defined as separate functions.

But I wonder why it doesn't support a fully connected layer in the same manner, even though one can be defined using Param or by other means.

jotaf98 commented 5 years ago

Hi, I took the same view as MatConvNet: the fully connected parts of a network are just like the convolutional ones, except that the spatial dimensions are 1x1.

In the C++/CUDA code, all the appropriate optimizations are applied in those cases, so there is no performance penalty. This also keeps the API a bit cleaner, since there are no duplicated functions that differ only in the number of dimensions.

You can see an example of this at work: in the file below, the "network" (which has only one linear layer) is actually fully connected in the sense you describe. The only differences from a CNN are that the input is reshaped to size [1, 1, features, samples] (line 10) and the filters' spatial dimensions are 1x1 (line 22). https://github.com/vlfeat/autonn/blob/master/examples/minimal/minimal_network.m
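To make the idea concrete, here is a minimal sketch of a fully connected layer written as a 1x1 convolution in autonn's style. The variable names and sizes (numIn, numOut, the reshape) are illustrative assumptions, not taken verbatim from the linked example:

```matlab
% Hedged sketch: a "fully connected" layer expressed as a 1x1 convolution.
% Assumes the feature dimension is the 3rd array dimension, as in MatConvNet.

numIn = 256 ;   % number of input features (illustrative)
numOut = 10 ;   % number of output features, e.g. classes (illustrative)

x = Input() ;   % expected size after reshaping: [1, 1, numIn, batchSize]

% 1x1 filters: one column of W per output feature
w = Param('value', 0.01 * randn(1, 1, numIn, numOut, 'single')) ;
b = Param('value', zeros(numOut, 1, 'single')) ;

% over a 1x1 spatial grid, this convolution computes exactly y = W*x + b
prediction = vl_nnconv(x, w, b) ;
```

Because the spatial extent of both the input and the filters is 1x1, the convolution degenerates to a plain matrix multiply per sample, which is why no dedicated fully connected function is needed.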