lmjohns3 / theanets

Neural network toolkit for Python
http://theanets.rtfd.org
MIT License

create a model for jointly optimizing multiple models #88

Closed lmjohns3 closed 9 years ago

lmjohns3 commented 9 years ago

It would be neat if we could create two models (e.g. an "encoder" and a "decoder," as in http://arxiv.org/pdf/1406.1078.pdf) and then stuff them into another "meta-model" that would jointly optimize the parameters of both sub-models together. Maybe hook the losses together, or just add the parameters of the first model to the loss for the second?
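To make the "jointly optimize" idea concrete, here is a minimal sketch in plain Python (not the theanets API; all names are illustrative): a one-parameter "encoder" h = a*x and "decoder" y = b*h trained together, with both parameters updated from the gradients of a single shared reconstruction loss.

```python
# Toy joint optimization of an "encoder" (h = a * x) and a
# "decoder" (y = b * h) under one shared reconstruction loss
#     L = sum((a * b * x - x) ** 2).
# Illustrative only -- this is not the theanets API.

def loss(xs, a, b):
    """Reconstruction error of the encoder-decoder pair."""
    return sum((a * b * x - x) ** 2 for x in xs)

def joint_train(xs, a=0.5, b=0.5, lr=0.1, steps=500):
    """Gradient descent on both sub-models' parameters at once."""
    for _ in range(steps):
        # dL/da and dL/db, accumulated over the data.
        ga = sum(2 * (a * b * x - x) * b * x for x in xs)
        gb = sum(2 * (a * b * x - x) * a * x for x in xs)
        a -= lr * ga
        b -= lr * gb
    return a, b
```

Training drives the product a*b toward 1 (perfect reconstruction); neither sub-model is optimal in isolation, only the pair is, which is exactly the coupling a meta-model would provide.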

nkundiushuti commented 9 years ago

*edit: sorry, I didn't notice you had already suggested this in a thread on the Google group.*

Would it be better to do this with another layer class, say a SplitLayer? Each SplitLayer would split its inputs and/or outputs into two or more parts, and the costs and weights would be updated relative to those parts. For example: if you have 6 inputs and 12 outputs and split them in half, the first 3 inputs would connect only with the first 6 outputs. This way you can have a network with two separate networks inside it, or any combination of small networks that join into larger ones.

I have done something similar when implementing a recurrent regressor for source separation (input=1024, output=2048); the output contains the two separated sources. The original paper is: http://www.ifp.illinois.edu/~huang146/papers/DRNN_ISMIR2014.pdf
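The block structure such a hypothetical SplitLayer would impose can be sketched as a 0/1 mask on the weight matrix (plain Python here, not theanets code): with 6 inputs and 12 outputs split in half, only the diagonal blocks carry connections.

```python
# Sketch of a block-diagonal connectivity mask for the proposed
# SplitLayer idea. Names and structure are illustrative, not part
# of theanets.

def split_mask(n_in, n_out, parts=2):
    """Return an n_in x n_out 0/1 mask: part p of the inputs
    connects only to part p of the outputs."""
    mask = [[0] * n_out for _ in range(n_in)]
    for p in range(parts):
        for i in range(p * n_in // parts, (p + 1) * n_in // parts):
            for j in range(p * n_out // parts, (p + 1) * n_out // parts):
                mask[i][j] = 1
    return mask

def forward(x, weights, mask):
    """Linear layer y = x . (weights * mask); masked connections
    carry no signal (and would receive no gradient in training)."""
    n_in, n_out = len(weights), len(weights[0])
    return [sum(x[i] * weights[i][j] * mask[i][j] for i in range(n_in))
            for j in range(n_out)]
```

With `split_mask(6, 12)`, activity on inputs 3-5 reaches only outputs 6-11, so the one layer behaves like two independent sub-networks living side by side.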

lmjohns3 commented 9 years ago

The theanets master branch now contains some code for implementing multiple losses within a single model. I think this will address the current need (and others!).

I still need to write some documentation for the feature, so I'll close this when that's finished.

lmjohns3 commented 9 years ago

I've added some documentation at http://theanets.rtfd.org/en/latest/api/losses.html

zwang4 commented 8 years ago

Any progress on the encoder-decoder model?