lmjohns3 / theanets

Neural network toolkit for Python
http://theanets.rtfd.org
MIT License

More explicit documentation on the decoding layer #22

Closed. kudkudak closed this issue 9 years ago.

kudkudak commented 10 years ago

Hello,

I think it is a bit counterintuitive that the last layer is always linear while the activation of the other layers is chosen by the user. The user should be able to pick the decoding activation, and in my opinion it should default to the "activation" option.
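A minimal sketch of the kind of interface this request is asking for, assuming the hidden-layer option is the existing "activation" setting and the decoder option is a hypothetical `output_activation` keyword (that name is not confirmed anywhere in this thread):

```python
import theanets

# Hypothetical sketch of the requested behavior: the decoding (output)
# activation is user-configurable and defaults to the same value as the
# hidden-layer "activation" option.  The ``output_activation`` keyword is
# an assumption for illustration, not a confirmed theanets argument.
exp = theanets.Experiment(
    theanets.feedforward.Autoencoder,
    layers=(784, 256, 784),
    activation='sigmoid',          # hidden-layer activation chosen by the user
    output_activation='sigmoid',   # requested: decoder activation, defaulting to `activation`
)
```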

lmjohns3 commented 10 years ago

Sorry for the late reply here! I agree that it would be useful to have a user-configurable output activation. Until now I've relied on subclassing to solve that problem (see https://github.com/lmjohns3/theano-nets/blob/master/theanets/feedforward.py#L485 for an example of making a softmax output layer), but I just checked in a change that will allow the output activation to be specified as a constructor argument.

Because I live mostly in autoencoder land, I will keep the default output activation as "linear" -- it is much more widely applicable when building autoencoders or any other sort of regression network.
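A hedged sketch of how the change described above might be used, assuming the new constructor keyword is called `output_activation` (the exact argument name is not stated in this thread):

```python
import theanets

# Sketch only: overriding the default "linear" output activation at
# construction time.  The ``output_activation`` keyword is assumed for
# illustration; leaving it out would keep the linear default, which suits
# autoencoders and other regression networks as described above.
exp = theanets.Experiment(
    theanets.feedforward.Regressor,
    layers=(784, 128, 10),
    output_activation='softmax',  # assumed keyword; overrides the 'linear' default
)
```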

I will close this issue when I add the actual feature.

lmjohns3 commented 9 years ago

I think this has finally been addressed. The documentation needs a lot of work, which I am going to try to get to in the next couple of weeks. Going to go ahead and close this issue though.

kudkudak commented 9 years ago

Thanks!