vlfeat / matconvnet

MatConvNet: CNNs for MATLAB

Multi Task Learning #11

Closed · dasguptar closed this issue 9 years ago

dasguptar commented 10 years ago

Hi, how would it be possible to implement a multi-task neural network using MatConvNet? Basically, how can I split the output of the final shared layer and send it to multiple loss layers, and combine the errors for backpropagation accordingly?

vedaldi commented 10 years ago

Yes, MatConvNet is designed to be flexible. This is definitely possible by recombining the core computational blocks, and it would boil down to:

* modifying vl_simplenn.m (or writing your own alternative driver script) to take more complex network topologies, and
* modifying the example training code cnn_train.m (or writing your own) accordingly to mix the multiple tasks.

Writing such scripts would be more or less difficult depending on how general your solution should be. We will likely add some of this functionality ourselves in a future release.
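For illustration, a rough, untested sketch of what "recombining the core computational blocks" could look like for two tasks sharing one trunk: run the shared part forward once, branch into two losses, and sum the gradients flowing back into the shared output. The names xShared, wA, bA, labelsA (and their B counterparts) are placeholders, not part of MatConvNet:

```matlab
% Forward: branch the shared feature map xShared into two task heads.
predA = vl_nnconv(xShared, wA, bA) ;
predB = vl_nnconv(xShared, wB, bB) ;
lossA = vl_nnsoftmaxloss(predA, labelsA) ;
lossB = vl_nnsoftmaxloss(predB, labelsB) ;

% Backward: derivative of the combined objective lossA + lossB.
dPredA = vl_nnsoftmaxloss(predA, labelsA, single(1)) ;
dPredB = vl_nnsoftmaxloss(predB, labelsB, single(1)) ;
[dxA, dwA, dbA] = vl_nnconv(xShared, wA, bA, dPredA) ;
[dxB, dwB, dbB] = vl_nnconv(xShared, wB, bB, dPredB) ;
dxShared = dxA + dxB ;  % combined error to backpropagate through the shared layers
```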

siskander commented 8 years ago

Regarding the earlier exchange in this thread (https://github.com/vlfeat/matconvnet/issues/11), where the suggestion was to modify vl_simplenn.m (or write an alternative driver script) and the example training code cnn_train.m to mix the multiple tasks:

I have the same problem. Has MatConvNet been changed since then, or do I still need to modify its m-files?

If it has changed, would you please tell me how to construct more complex topologies?

dasguptar commented 8 years ago

Hi @siskander, you can take a look at DAGs (the DagNN wrapper) in MatConvNet. They support any directed acyclic graph architecture and can have multiple inputs and multiple outputs, which is what you will primarily need for multi-task learning. :smile:
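To make that concrete, here is a minimal, untested sketch of a two-task DagNN: a shared trunk whose output variable `shared` feeds two independent heads, each ending in its own softmax-log loss. Layer sizes and the variable and parameter names (labelA, labelB, objectiveA, objectiveB, conv1f, ...) are illustrative assumptions:

```matlab
net = dagnn.DagNN() ;

% Shared trunk (a single conv layer here, just to show the wiring).
net.addLayer('conv1', dagnn.Conv('size', [5 5 3 32], 'hasBias', true), ...
             {'input'}, {'shared'}, {'conv1f', 'conv1b'}) ;

% Task A head: its own classifier and loss.
net.addLayer('fcA', dagnn.Conv('size', [1 1 32 10], 'hasBias', true), ...
             {'shared'}, {'predA'}, {'fcAf', 'fcAb'}) ;
net.addLayer('lossA', dagnn.Loss('loss', 'softmaxlog'), ...
             {'predA', 'labelA'}, {'objectiveA'}) ;

% Task B head.
net.addLayer('fcB', dagnn.Conv('size', [1 1 32 5], 'hasBias', true), ...
             {'shared'}, {'predB'}, {'fcBf', 'fcBb'}) ;
net.addLayer('lossB', dagnn.Loss('loss', 'softmaxlog'), ...
             {'predB', 'labelB'}, {'objectiveB'}) ;

net.initParams() ;
```

When training with cnn_train_dag, listing both objectives in the derOutputs option (e.g. 'derOutputs', {'objectiveA', 1, 'objectiveB', 1}) should make both losses contribute to the backward pass; the weights can be changed to balance the tasks. An extra dagnn.Loss('loss', 'classerror') layer per head is one way to also track per-task error.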

siskander commented 8 years ago

Hello:

Thank you for your reply. I have read the tutorials about DagNN.

I am implementing a multi-label problem with 64 attributes.

So I designed a CNN with 64 softmax outputs, each followed by a loss function:

1) During the epochs I found that only 'val' appears for all the batches; there is no 'train'. Why?
2) How do I divide the inputs and labels for each loss layer?
3) How can I get the accuracy for each one?

Thank you.
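On question 2, a hedged sketch of how per-task labels are typically supplied when training with cnn_train_dag: the getBatch function returns a cell array of variable/value pairs, and the label variable names simply have to match the loss layers' label inputs (labelA, labelB and the imdb field names below are assumptions carried over from the earlier sketch). On question 1, if I remember the bundled trainers correctly, the training subset defaults to the images with imdb.images.set == 1 and the validation subset to set == 2, so seeing only 'val' usually means nothing is marked with set 1.

```matlab
function inputs = getBatch(imdb, batch)
% Return the inputs cell array that net.eval / cnn_train_dag expect.
images  = imdb.images.data(:,:,:,batch) ;
labelsA = imdb.images.labelA(:,batch) ;   % first task's labels (hypothetical field)
labelsB = imdb.images.labelB(:,batch) ;   % second task's labels (hypothetical field)
inputs  = {'input', images, 'labelA', labelsA, 'labelB', labelsB} ;
end
```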

siskander commented 8 years ago

4) How do I get the loss value for each output?
5) Is it necessary to implement a layer to collect the gradients during backpropagation?

Thank you.
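On question 5: no extra gradient-collecting layer should be needed, since during the backward pass DagNN accumulates the derivatives of any variable, such as the shared trunk output, that feeds several layers. On question 4, a small hedged sketch for reading each task's loss after a forward pass (objectiveA/objectiveB are the illustrative names used above); cnn_train_dag should also report the output of every dagnn.Loss layer in its per-epoch statistics:

```matlab
% Keep the loss values around (precious) so eval does not free them,
% then read them back by variable index.
net.vars(net.getVarIndex('objectiveA')).precious = true ;
net.vars(net.getVarIndex('objectiveB')).precious = true ;
net.eval({'input', images, 'labelA', labelsA, 'labelB', labelsB}) ;
lossA = net.vars(net.getVarIndex('objectiveA')).value ;
lossB = net.vars(net.getVarIndex('objectiveB')).value ;
```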

siskander commented 8 years ago

Alternatively:

In the documentation of loss.m, there is an option for c to be multi-label:

c = HxWxDxN

but in my case HxW is 1x1, so

c = DxN = 64 attributes x number of instances per batch.

This format generates an error. Would you please help me with this?
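If the intent is the binary/attribute form of the loss, where c has the same shape as the prediction x, then one likely fix, assuming x is 1 x 1 x 64 x N, is simply to add the two singleton spatial dimensions to the 64 x N label matrix before calling the loss, e.g.:

```matlab
% c arrives as 64 x N (+1/-1 attribute labels); reshape it to 1 x 1 x 64 x N
% so it matches the prediction x, as the multi-label losses expect.
c = reshape(c, [1 1 size(c,1) size(c,2)]) ;
y = vl_nnloss(x, c, [], 'loss', 'logistic') ;   % 'logistic' as an example binary loss
```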

Nepths commented 7 years ago

Hi @siskander @sungsooo
I have the same problem you had. Have you solved it? Could you explain how to solve it, or share an example of multi-task learning? Thank you.