I've implemented unsupervised autoencoders with Keras; you just use the input itself as the desired output (the target). For models that need noise, like variational autoencoders, I wrote my own regularizers to add the noise. There is also one person who said he will work on generative adversarial networks.
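For context, a minimal sketch of that idea against the current tf.keras API (layer sizes and the toy data are just placeholders):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data: 1000 flattened 28x28 "images" as a stand-in for real inputs.
x = np.random.rand(1000, 784).astype("float32")

model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(784,)),  # encoder
    layers.Dense(784, activation="sigmoid"),                  # decoder
])
model.compile(optimizer="adam", loss="mse")

# "Unsupervised" in the sense above: the input is also the desired output.
model.fit(x, x, epochs=5, batch_size=32)
```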
Beyond that I think there's been a lot of interest in unsupervised learning.
I also wrote some examples of sparse coding in Seya. Yeah, that's all I've heard about unsupervised learning around here.
This issue is a little old, but I'll add my support for it. An alternative to the OP's suggestion would be the ability to pass an expression in as an initial parameter value. Thus, one could pass W from one Dense layer as W.T when initializing another. Instead of adding a new parameter that gets updated via the gradient during learning, the gradient would continue to flow backward through the expression to the original parameter(s).
I've also ended up with something similar to your TiedDense. The above would make a separate class unnecessary.
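For what it's worth, a rough sketch of that effect with the current tf.keras functional API: the Lambda layer here simply reuses `encoder.kernel` transposed, so no new kernel is created and gradients reach the shared weights through the transpose (this is a workaround sketch, not a built-in feature):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))
encoder = layers.Dense(32, activation="relu")
h = encoder(inputs)

# The decoder has no kernel of its own: it evaluates the expression
# transpose(encoder.kernel), so gradients flow back to the encoder's weights.
decoded = layers.Lambda(
    lambda z: tf.sigmoid(tf.matmul(z, tf.transpose(encoder.kernel)))
)(h)

model = keras.Model(inputs, decoded)
model.compile(optimizer="adam", loss="mse")
```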
I know that there is the AutoEncoder layer; however, important concepts such as DBNs and RBMs are not implemented. Is Keras intended to be a framework purely for supervised learning methods? If not, I could propose working on integrating RBMs, denoising autoencoders with tied weights, etc. into Keras.
I would propose implementing a 'transpose' operation for each layer, so that a layer can be used "in both directions", i.e. the decoder reuses the encoder layer with its weights transposed, as in the sketch below.
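Purely as an illustration of the proposed usage (neither `transpose()` nor `.T` exists on Keras layers; both are hypothetical names here):

```python
# Hypothetical API -- not implemented in Keras.
encoder = Dense(32, activation="relu", input_dim=784)
model.add(encoder)
model.add(encoder.transpose())  # decoder reusing the encoder's weights, transposed
# or, equivalently, something like:
# model.add(encoder.T)
```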
Another option would be to define special layers such as a TiedDense (that is what it is currently called in my code), subclassing the Dense layer class; see the sketch below.
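Along those lines, a minimal sketch against the current tf.keras API (it subclasses the base `Layer` rather than `Dense` for simplicity, and assumes the tied encoder layer has already been built):

```python
import tensorflow as tf
from tensorflow.keras import layers, activations

class TiedDense(layers.Layer):
    """Dense-like layer whose kernel is the transpose of another Dense layer's kernel."""

    def __init__(self, tied_to, activation=None, **kwargs):
        super().__init__(**kwargs)
        self.tied_to = tied_to                        # the encoder Dense layer
        self.activation = activations.get(activation)

    def build(self, input_shape):
        # Only the bias is a new trainable parameter; the kernel is shared.
        units = self.tied_to.kernel.shape[0]          # encoder's input dimension
        self.bias = self.add_weight(name="bias", shape=(units,), initializer="zeros")

    def call(self, inputs):
        outputs = tf.matmul(inputs, tf.transpose(self.tied_to.kernel)) + self.bias
        return self.activation(outputs)

# Usage sketch: the encoder must be built (called once) before TiedDense is built.
encoder = layers.Dense(32, activation="relu")
x = tf.keras.Input(shape=(784,))
h = encoder(x)
out = TiedDense(encoder, activation="sigmoid")(h)
model = tf.keras.Model(x, out)
```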
The obvious downside of this solution is that one would have to implement such a class separately for every layer type.
Any opinions on this?