vlfeat / matconvnet

MatConvNet: CNNs for MATLAB

Can MatConvNet concatenate two parallel layer outputs? #29

Closed dongb5 closed 8 years ago

dongb5 commented 9 years ago

Can MatConvNet concatenate two parallel layer outputs, like the CONCAT layer in Caffe? Suppose I have two images as input, convolved at the same time and followed by a fully connected layer that compares the two images. Can I build such a network with MatConvNet? Thanks.

vedaldi commented 9 years ago

There isn’t such a block yet, but it would be easy to code a custom layer to do so. The easiest thing would be to concatenate two consecutive images in a batch, in which case I suspect the CONCAT operator would amount to reshaping the data.
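The batch trick described above can be sketched in plain MATLAB. This is an assumption about what was meant, not code from the toolbox: because MatConvNet stores activations as H x W x C x N arrays in column-major order, stacking two consecutive batch elements along the channel dimension is exactly a `reshape` of the data.

```matlab
% Sketch of the reshaping trick: pairs of images to be compared are
% stored as consecutive elements of the batch. Reinterpreting the
% H x W x C x N array as H x W x 2C x N/2 merges each consecutive
% pair into one sample with doubled channels (column-major order
% guarantees y(:,:,1:C,k) == x(:,:,:,2k-1) and
% y(:,:,C+1:2C,k) == x(:,:,:,2k)).
x = randn(200, 200, 50, 8, 'single');   % 4 image pairs, interleaved
assert(mod(size(x, 4), 2) == 0);        % batch size must be even
sz = size(x);
y = reshape(x, sz(1), sz(2), 2 * sz(3), sz(4) / 2);
```

A fully connected layer applied to `y` then sees both images of a pair at once, which is the comparison setup asked about in the question.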

We will be adding several new features in the upcoming months and we would like to get a sense of what people need, so please let us know what else is missing for you.


dongb5 commented 9 years ago

Thank you for the reply!

nian-liu commented 9 years ago

@vedaldi I think it would be very good if MatConvNet supported multiple input layers (for example, multi-scale architectures) and multiple output layers (for example, the Inception module in GoogLeNet). Additionally, weight sharing and more activation and loss layers (for example, a logistic activation layer and a Euclidean loss layer) are needed, as these are important tricks and building blocks in CNNs. These new features would make MatConvNet a powerful toolbox for implementing CNNs.

swamiviv commented 9 years ago

Has there been any update regarding concatenation?

Tgaaly commented 8 years ago

I would also like to know if there has been any update regarding this?

vedaldi commented 8 years ago

Hi, I think the answer is yes. The new DAG module supports GoogLeNet and you can now download a pre-trained implementation of it. Several new layers have been added as well, although I am sure we are still missing a few useful ones.
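A minimal sketch of using the imported model with the DagNN wrapper follows. The model file name and variable names (`data`, `prob`) match the GoogLeNet import MatConvNet distributed at the time; treat them as assumptions and check `net.vars` if your copy differs.

```matlab
% Load the GoogLeNet model imported from Caffe and run a forward pass.
net = dagnn.DagNN.loadobj(load('imagenet-googlenet-dag.mat'));
net.mode = 'test';

% Prepare an input image according to the model's normalization metadata.
im = single(imread('peppers.png'));
im = imresize(im, net.meta.normalization.imageSize(1:2));
im = bsxfun(@minus, im, net.meta.normalization.averageImage);

% Evaluate the DAG and read out the class posterior.
net.eval({'data', im});
scores = squeeze(net.vars(net.getVarIndex('prob')).value);
[bestScore, best] = max(scores);
```

The same `eval`/`getVarIndex` pattern works for any DAG, including networks with concatenation layers.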


jmontoyaz commented 8 years ago

@vedaldi thanks for releasing the new DAG module! I was wondering whether you could provide a working example of how you defined the network architecture of GoogLeNet (as in cnn_imagenet_init.m) and how you trained it (as in cnn_imagenet.m)? That would be great :+1:

vedaldi commented 8 years ago

Hi, for the moment we have not tried to train GoogLeNet from scratch, but imported the model from the Caffe Model Zoo. You should be able to use MatConvNet for fine-tuning without too many problems. Training from scratch should also work, but I think it may take a bit of effort to figure out all the parameters.
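One possible fine-tuning setup is sketched below. The layer, variable, and parameter names (`cls3_fc`, `cls3_pool`, the 1024 input channels) are placeholders, not taken from the thread; list `net.layers` in your copy of the imported model to find the real names before adapting this.

```matlab
% Hypothetical fine-tuning sketch: replace the 1000-way classifier of
% the imported GoogLeNet with an nClasses one and attach a training loss.
nClasses = 20;
net = dagnn.DagNN.loadobj(load('imagenet-googlenet-dag.mat'));

net.removeLayer('cls3_fc');   % placeholder name for the old classifier
net.addLayer('cls3_fc', ...
             dagnn.Conv('size', [1 1 1024 nClasses], 'hasBias', true), ...
             {'cls3_pool'}, {'prediction'}, {'cls3_fc_f', 'cls3_fc_b'});
net.addLayer('loss', dagnn.Loss('loss', 'softmaxlog'), ...
             {'prediction', 'label'}, {'objective'});

% Initialise only the new parameters by hand, so the pretrained
% weights elsewhere in the network are left untouched.
f = net.getParamIndex('cls3_fc_f');
net.params(f).value = 0.01 * randn(1, 1, 1024, nClasses, 'single');
b = net.getParamIndex('cls3_fc_b');
net.params(b).value = zeros(1, nClasses, 'single');
```

From here the usual SGD training loop over the DAG applies, with a reduced learning rate for the pretrained layers if desired.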


yuanyc06 commented 8 years ago

@vedaldi Hi Andrea, I'm new to MatConvNet. As you mentioned above, it is possible to use the pretrained GoogLeNet model for fine-tuning. However, as far as I know, there's no support for the inception module in vl_simplenn, so I wonder how the fine-tuning with the pretrained GoogLeNet should be done? Thank you very much!

vedaldi commented 8 years ago

The inception module is just a small network pattern. See e.g. here for a discussion:

https://github.com/BVLC/caffe/issues/1106

Hence, using DagNN, which can run such networks, you also get the ability to run inception modules.


mosbate commented 8 years ago

I want to implement parallel layers in my network. Two images enter two separate parallel conv layers, and after convolution the two results are connected to a fully connected layer. Both input images have size 200x200; one conv layer has 3x3x50 filters and the other 5x5x50. I studied the ImageNet example, but that is a different structure. How can I design this network? Thanks.
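One way this two-branch network could be built with DagNN is sketched below. This is not from the thread, and it assumes single-channel inputs and 2 output classes; note also that the 3x3 and 5x5 branches need different padding (1 and 2) so their outputs have the same spatial size before they can be concatenated along the channel dimension.

```matlab
% Two parallel conv branches over two inputs, concatenated along
% channels and followed by a "fully connected" layer (a conv whose
% filter covers the whole 200 x 200 map).
net = dagnn.DagNN();
net.addLayer('conv1', dagnn.Conv('size', [3 3 1 50], 'pad', 1), ...
             {'imageA'}, {'featA'}, {'f1', 'b1'});
net.addLayer('conv2', dagnn.Conv('size', [5 5 1 50], 'pad', 2), ...
             {'imageB'}, {'featB'}, {'f2', 'b2'});
net.addLayer('concat', dagnn.Concat('dim', 3), ...
             {'featA', 'featB'}, {'feat'});       % -> 200 x 200 x 100
net.addLayer('fc', dagnn.Conv('size', [200 200 100 2]), ...
             {'feat'}, {'prediction'}, {'fc_f', 'fc_b'});
net.initParams();   % all parameters are new here, so this is safe
```

Evaluation then takes both images at once: `net.eval({'imageA', imA, 'imageB', imB})`.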

lenck commented 8 years ago

Hmm, I think this issue can be closed, as the concatenation layer has been in DagNN for ages...