Open yorkerlin opened 9 years ago
@karlnapf
@lisitsyn
@iglesias
Any suggestions?
I have implemented stochastic gradient descent (SGD), stochastic variance reduced gradient (SVRG), and stochastic mirror descent (SMD) minimizers. The next step is to implement stochastic coordinate descent (SCD).
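For context, here is a minimal NumPy sketch of the SVRG update on a least-squares objective. This is a generic illustration of the algorithm, not Shogun's `FirstOrderSAGCostFunction` API; all function and variable names are mine:

```python
import numpy as np

def svrg_least_squares(X, y, step=0.1, epochs=10, seed=0):
    """Minimize (1/2n)||Xw - y||^2 with SVRG (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        w_snap = w.copy()                            # snapshot point
        full_grad = X.T @ (X @ w_snap - y) / n       # full gradient at snapshot
        for _ in range(n):
            i = rng.integers(n)
            gi = X[i] * (X[i] @ w - y[i])            # stochastic grad at current w
            gi_snap = X[i] * (X[i] @ w_snap - y[i])  # same sample at snapshot
            w -= step * (gi - gi_snap + full_grad)   # variance-reduced step
    return w
```

The point of the snapshot terms is that the correction `gi - gi_snap + full_grad` is an unbiased gradient estimate whose variance vanishes as `w` approaches the snapshot, which is what allows a constant step size.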
For example, for the L1 (lasso) problem:
- the n > d case
- the d > n case

where n is the number of sample points and d is the number of features.
Next step:
@yorkerlin hi, it might be a good idea to talk to John if you are doing VW related work. We always wanted to do another round of integrating Shogun and VW closer to each other. Send me an email if you want me to introduce you
@karlnapf Yes. Please check out the email.
The stochastic `minimizer` framework (http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1FirstOrderSAGCostFunction.html, http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1FirstOrderStochasticCostFunction.html, http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1FirstOrderStochasticMinimizer.html) and the `lossFunction` framework for Vowpal Wabbit (http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLossFunction.html) can be used to implement Shogun's online linear classifiers or the `deep learning` module (eg, http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html).

For example, we can extend the `minimizer` framework to support `sparse` vectors, `GPU` vectors, and maybe parallel/asynchronous updates.

BTW, the `lossFunction` is a subclass of `SGObject`. Data members in the `lossFunction` are not mutable variables. No idea why it is a subclass of `SGObject`? @iglesias

I think all subclasses of `lossFunction` are element-wise cost functions.

My goal is to implement `deep GP` or `deep probabilistic models` in the `deep learning` module once my GP work is done.
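On the element-wise point: a tiny sketch of how a stateless, element-wise loss object can plug into an SGD-style minimizer. This is a generic Python illustration of the pattern, not Shogun's actual `CLossFunction`/`SGObject` hierarchy; class and method names are mine:

```python
import numpy as np

class SquaredLoss:
    """Stateless element-wise loss: no mutable members, safe to share."""
    def loss(self, pred, label):
        return 0.5 * (pred - label) ** 2
    def first_derivative(self, pred, label):
        return pred - label

def sgd_linear(X, y, loss_fn, step=0.05, epochs=20, seed=0):
    """SGD on a linear model, delegating per-sample gradients to loss_fn."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            pred = X[i] @ w
            # chain rule: d(loss)/dw = loss'(pred, y_i) * x_i
            w -= step * loss_fn.first_derivative(pred, y[i]) * X[i]
    return w
```

Because the loss only ever sees one (prediction, label) pair at a time and carries no state, the same loss object can be reused across minimizers, sparse/GPU vector backends, or parallel workers without synchronization.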