Closed: ahwillia closed this issue 8 years ago.
I was just working on the last checkbox, and had the idea that maybe there should be a `StateUpdater` to go along with the `ParamUpdater`. So the type definition would be:
```julia
immutable GradientLearner{LR <: LearningRate, PU <: ParamUpdater, SU <: StateUpdater} <: LearningStrategy
    lr::LR
    pu::PU
    su::SU
end
```
and then, for the type of learning I was doing, we could have a type `BackpropUpdater <: StateUpdater`, which simply does a forward and backward pass for an observation. We could also have a `NoUpdater` which does nothing.
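To make that concrete, here is a minimal sketch of how the two updaters could dispatch on a per-observation hook. Everything below is hypothetical: `update_state!`, `ToyLearner`, and the field names are illustrations rather than an actual JuliaML API, written in the same Julia 0.5-era syntax as the `immutable` definition above:

```julia
abstract StateUpdater                            # 0.5-era syntax

immutable NoUpdater       <: StateUpdater end    # does nothing
immutable BackpropUpdater <: StateUpdater end    # forward + backward pass

# Toy learner so the sketch is self-contained (hypothetical)
type ToyLearner
    w::Vector{Float64}   # parameters
    g::Vector{Float64}   # gradient buffer
end

# Hypothetical per-observation hook that a GradientLearner would call
update_state!(::NoUpdater, learner::ToyLearner, obs) = learner

function update_state!(::BackpropUpdater, learner::ToyLearner, obs)
    x, y = obs
    ŷ = dot(learner.w, x)          # forward pass: prediction
    learner.g[:] = (ŷ - y) * x     # backward pass: squared-loss gradient
    learner
end
```

A `GradientLearner` could then call `update_state!(learner.su, model, obs)` once per observation and hand the resulting gradient to its `ParamUpdater`, so swapping `BackpropUpdater` for `NoUpdater` changes only how state is refreshed, not how parameters are stepped.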
Thoughts? @ahwillia do you think this will accommodate what you were thinking about?
Discussion will continue on this PR - https://github.com/JuliaML/StochasticOptimization.jl/pull/6
@tbreloff and I had a long chat on the JuliaML gitter. We decided on the following courses of action:
- StochasticOptimization should depend only on `LearnBase` and have no other dependencies.
- `GradientDescent <: LearningStrategy` should be renamed, since it doesn't necessarily follow the gradient exactly (unless it uses the parameter updater `SGD`); see the sketch below.
- `GradientDescent` needs to be re-written so that it does not depend on the Transformations API.

The reasoning behind all of this is so that anyone can access `GradientDescent` and the parameter updaters without having to buy into the full JuliaML ecosystem.
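As a hedged illustration of the rename rationale (all type and function names below are hypothetical, in the same Julia 0.5-era syntax as the earlier sketch): a plain `SGD` updater steps exactly along the negative gradient, but any stateful updater steps along a modified direction, so a learner parameterized by an arbitrary `ParamUpdater` is not literally doing gradient descent.

```julia
abstract ParamUpdater   # hypothetical, for illustration only

immutable SGD <: ParamUpdater
    η::Float64           # step size
end

type Momentum <: ParamUpdater
    η::Float64           # step size
    ρ::Float64           # momentum coefficient
    v::Vector{Float64}   # velocity state
end

# SGD follows the current gradient exactly...
update!(pu::SGD, w, g) = (w[:] = w - pu.η * g; w)

# ...while momentum follows a smoothed direction that generally
# differs from the current gradient, hence the misleading name.
function update!(pu::Momentum, w, g)
    pu.v[:] = pu.ρ * pu.v + g
    w[:] = w - pu.η * pu.v
    w
end
```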