JuliaML / StochasticOptimization.jl

Implementations of stochastic optimization algorithms and solvers

Make this package work for more generic models #4

Closed by ahwillia 8 years ago

ahwillia commented 8 years ago

@tbreloff and I had a long chat on the JuliaML gitter. We decided on the following courses of action:

The reasoning behind all of this is that anyone should be able to access and use the parameter updaters without having to buy into the full JuliaML ecosystem.
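To illustrate the decoupling idea, here is a minimal sketch of what a standalone parameter updater could look like, with no dependency on the rest of JuliaML. The names `SGDUpdater` and `update!` and their signatures are hypothetical, not the package's actual API, and it uses current Julia syntax (`struct` rather than the older `immutable`):

```julia
# Hypothetical standalone updater hierarchy: usable on a bare parameter
# vector without any other JuliaML machinery.
abstract type ParamUpdater end

# Plain SGD: θ ← θ - η * ∇
struct SGDUpdater <: ParamUpdater
    η::Float64   # step size
end

# Mutate the parameter vector in place given a gradient.
function update!(θ::AbstractVector, u::SGDUpdater, ∇::AbstractVector)
    θ .-= u.η .* ∇
    return θ
end

θ = [1.0, 2.0]
update!(θ, SGDUpdater(0.1), [1.0, 1.0])   # θ becomes [0.9, 1.9]
```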

tbreloff commented 8 years ago

I was just working on the last checkbox, and had the idea that maybe there should be a StateUpdater to go along with the ParamUpdater. So the type def would be:

```julia
immutable GradientLearner{LR <: LearningRate, PU <: ParamUpdater, SU <: StateUpdater} <: LearningStrategy
    lr::LR  # learning rate schedule
    pu::PU  # parameter updater
    su::SU  # state updater
end
```

and then for the type of learning I was doing, we could have a type BackpropUpdater <: StateUpdater, which simply does a forward and backward pass for an observation. We could also have a NoUpdater which does nothing.
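A sketch of how that `StateUpdater` hierarchy could dispatch, using current Julia syntax (`struct`/`abstract type` instead of the older `immutable`). `BackpropUpdater` and `NoUpdater` are the names from the comment above; the `state_update!` function, its signature, and the `forward!`/`backward!` pseudo-calls are invented here for illustration:

```julia
# Hypothetical StateUpdater hierarchy matching the proposal above.
abstract type StateUpdater end

struct BackpropUpdater <: StateUpdater end  # forward + backward pass per observation
struct NoUpdater <: StateUpdater end        # no-op for models without internal state

# NoUpdater: do nothing, just return the model unchanged.
state_update!(model, ::NoUpdater, obs) = model

# BackpropUpdater: run a forward and backward pass for one observation.
function state_update!(model, ::BackpropUpdater, obs)
    # forward!(model, obs)   # pseudo-calls; the actual API would
    # backward!(model, obs)  # depend on the model's transformation type
    return model
end
```

The point of the dispatch is that `GradientLearner` can stay generic: models that need backprop opt in via `BackpropUpdater`, while simpler models pass `NoUpdater` and pay no cost.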

Thoughts? @ahwillia do you think this will accommodate what you were thinking about?

ahwillia commented 8 years ago

Discussion will continue on this PR - https://github.com/JuliaML/StochasticOptimization.jl/pull/6