JuliaML / ObjectiveFunctions.jl

Generic definitions of objective functions using abstractions from LearnBase

Implement Sparse Coding #2

Open · tbreloff opened this issue 8 years ago

tbreloff commented 8 years ago

Sparse Coding is a framework for finding basis vectors and weights that decompose an input vector into a linear combination of a subset of the basis vectors. The framework is very similar to Empirical Risk Minimization in that there's a loss and a penalty, except that the penalty is on the output, not the parameters. I believe sparse coding refers specifically to a linear model, but of course any combination of transformation/loss/penalty could be used... we just need to add a penalty on the output. Should we just go ahead and add this to `RegularizedObjective`? It can default to `NoPenalty`.
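
To make the structure concrete, here's a rough sketch of the objective being described (a sketch only; the function name, the `λ` default, and the squared-error/L1 choices are illustrative, not package API):

```julia
using LinearAlgebra

# Sketch: minimize over a:  L(x, W*a) + λ * P(a)
# with a squared-error loss L and an L1 penalty P on the output code `a`.
sparse_coding_objective(x, W, a; λ = 0.1) =
    0.5 * norm(x - W * a)^2 + λ * norm(a, 1)
```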

refs:
- http://ufldl.stanford.edu/wiki/index.php/Sparse_Coding
- http://gabgoh.github.io/SARG/

cc: @gabgoh

tbreloff commented 8 years ago

Rethinking this a bit, I don't think this is correct:

> The framework is very similar to Empirical Risk Minimization in that there's a loss and a penalty, except that the penalty is on the output, not the parameters

If you consider the input `x` as being approximated by the transformation `y = w*a`, then we want to ensure sparsity on `a`, not `y`. Couple this with the fact that both `w` and `a` are "parameters" to be learned, and you have something very different from ERM. I think it requires a separate type/implementation. A rough sketch of the joint objective is below.
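
A sketch of that joint objective, assuming a squared-error loss and an L1 penalty (the names `X`, `W`, `A` and the batched column layout are mine, just for illustration):

```julia
using LinearAlgebra

# Dictionary learning: both the basis W and the codes A are learned.
# Columns of X are inputs; column i of A is the code for X[:, i].
# minimize over W, A:  sum_i  0.5*norm(X[:,i] - W*A[:,i])^2 + λ*norm(A[:,i], 1)
dictionary_objective(X, W, A; λ = 0.1) =
    sum(0.5 * norm(X[:, i] - W * A[:, i])^2 + λ * norm(A[:, i], 1)
        for i in 1:size(X, 2))
```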

gabgoh commented 8 years ago

I think there are two separate problems to consider: the Dictionary Learning problem (learning `w` and `a`) and Sparse Coding (learning just `a` with `w` fixed). The latter is also known as sparse regression, which could be ERM? (I'm not familiar with ERM, but it sounds like a generalized regression.)
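
For the fixed-`w` case with a squared-error loss and an L1 penalty, the subproblem is exactly the lasso; a minimal proximal-gradient (ISTA) sketch, with all names hypothetical rather than package code:

```julia
using LinearAlgebra

# Soft-thresholding: the proximal operator of t * ||.||_1
soft_threshold(v, t) = sign.(v) .* max.(abs.(v) .- t, 0)

# ISTA for: minimize over a:  0.5 * norm(x - W*a)^2 + λ * norm(a, 1)
function sparse_code(x, W, λ; iters = 500)
    a = zeros(size(W, 2))
    η = 1 / opnorm(W)^2              # step size: inverse Lipschitz constant of the gradient
    for _ in 1:iters
        g = W' * (W * a - x)         # gradient of the smooth (loss) term
        a = soft_threshold(a - η * g, η * λ)
    end
    return a
end
```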