This code is no longer maintained. The codebase has been moved to https://github.com/scikit-learn-contrib/skglm. This repository only serves to reproduce the results of the AISTATS 2021 paper "Anderson acceleration of coordinate descent" by Quentin Bertrand and Mathurin Massias.
Currently we have a `Penalty` base class, from which `L1`, `L1_plus_L2`, and any new penalty inherit (in `penalties.py`).
For each type of penalty, we have a corresponding sklearn estimator (`Lasso`, `Enet`) in `estimators.py`.
I don't see this scaling well as more penalties and more datafits are added. In my opinion we should have a single `Estimator` class, instantiated with a `Penalty` and a `Datafit`.
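To make the proposal concrete, here is a minimal sketch of that composition, with illustrative names and toy implementations (none of this is the actual repository API):

```python
class L1Penalty:
    """Toy L1 penalty: alpha * ||w||_1 (illustrative, not the repo's class)."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha

    def value(self, w):
        return self.alpha * sum(abs(wj) for wj in w)


class QuadraticDatafit:
    """Toy least-squares datafit: ||y - Xw||^2 / (2 n)."""

    def value(self, X, y, w):
        n = len(y)
        residuals = [yi - sum(xij * wj for xij, wj in zip(xi, w))
                     for xi, yi in zip(X, y)]
        return sum(r ** 2 for r in residuals) / (2 * n)


class Estimator:
    """Single estimator class parameterized by a datafit and a penalty."""

    def __init__(self, datafit, penalty):
        self.datafit = datafit
        self.penalty = penalty

    def objective(self, X, y, w):
        # datafit(X, y, w) + penalty(w): the objective every solver minimizes
        return self.datafit.value(X, y, w) + self.penalty.value(w)
```

Under this design, `Estimator(QuadraticDatafit(), L1Penalty(alpha=0.1))` plays the role of a `Lasso`, and new penalty/datafit combinations need no new estimator class.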
This second design will cause our estimator to not be compatible with `GridSearchCV`: the parameters of the penalty are attributes of `estimator.penalty`, not of `estimator`.
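For reference, scikit-learn's double-underscore convention for nested parameters may cover this case, provided both classes expose `get_params`/`set_params` (e.g. by inheriting `BaseEstimator`). A sketch with hypothetical class names:

```python
from sklearn.base import BaseEstimator


class L1Penalty(BaseEstimator):
    # hypothetical penalty: exposes alpha through get_params/set_params
    def __init__(self, alpha=1.0):
        self.alpha = alpha


class MyEstimator(BaseEstimator):
    # hypothetical estimator holding the penalty as a constructor argument
    def __init__(self, penalty=None):
        self.penalty = penalty


est = MyEstimator(penalty=L1Penalty(alpha=1.0))
# get_params(deep=True) exposes the nested parameter as "penalty__alpha",
# so a grid like {"penalty__alpha": [0.1, 1.0]} is valid for GridSearchCV
est.set_params(penalty__alpha=0.5)
```

With this, `GridSearchCV(est, {"penalty__alpha": [0.1, 1.0]})` can search over penalty parameters even though they live on `estimator.penalty`.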
Do you see a solution @agramfort ?