glm-tools / pyglmnet

Python implementation of elastic-net regularized generalized linear models
http://glm-tools.github.io/pyglmnet/
MIT License

The pyglmnet solvers give different answers and neither solver agrees with sklearn #395

Open idc9 opened 3 years ago

idc9 commented 3 years ago

The pyglmnet package gives different estimated coefficients for linear regression depending on which solver you select (e.g. cdfast or batch-gradient), and neither solver agrees with sklearn (I believe they should all agree!). The code below gives an example for both ElasticNet and vanilla Lasso.

Reproducible examples

from sklearn.linear_model import ElasticNet, Lasso
from sklearn.datasets import make_regression
import numpy as np

from pyglmnet.pyglmnet import GLM

# sample some linear regression data
X, y, coef_ = make_regression(
    n_samples=200, n_features=20,
    noise=1, n_informative=10, coef=True,
    random_state=1)

ElasticNet

First let's check elastic net

alpha = 0.9
l1_ratio = .5

kws = {'distr': 'gaussian',
       'alpha': l1_ratio,
       'reg_lambda': alpha
      }

# solve with coordinate descent
enet_cd = GLM(solver='cdfast', **kws)
enet_cd.fit(X, y)

# solve with gradient descent
enet_grad = GLM(solver='batch-gradient', **kws)
enet_grad.fit(X, y)

print('pyglmnet gradient vs cd', np.linalg.norm(enet_grad.beta_ - enet_cd.beta_))

# solve with sklearn
enet_sklearn = ElasticNet(alpha=alpha, l1_ratio=l1_ratio)
enet_sklearn.fit(X, y)

print('gradient vs sklearn norm diff', np.linalg.norm(enet_grad.beta_ - enet_sklearn.coef_))
print('cd vs sklearn norm diff', np.linalg.norm(enet_cd.beta_ - enet_sklearn.coef_))

And we see the solutions are different:

pyglmnet gradient vs cd 4.387709772237268
gradient vs sklearn norm diff 4.776458575567959
cd vs sklearn norm diff 2.198761137967092

I messed around with the optimization parameters (e.g. tol, learning_rate, max_iter) and could not get any of them to agree. Of course the word "different" is subject to what you think the numerical tolerance should be, but these norms seem large enough to cause concern.
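
The kind of variation I tried looks roughly like the sketch below (tightening tol and max_iter and lowering the learning rate; the specific values here are illustrative, and kws, enet_sklearn, X, y come from the snippet above), and the gaps remain:

# tighter optimization settings than the defaults; these particular values are illustrative
strict_kws = dict(kws, max_iter=10000, tol=1e-8)

enet_cd_strict = GLM(solver='cdfast', **strict_kws)
enet_cd_strict.fit(X, y)

enet_grad_strict = GLM(solver='batch-gradient', learning_rate=1e-3, **strict_kws)
enet_grad_strict.fit(X, y)

print('strict gradient vs cd norm diff',
      np.linalg.norm(enet_grad_strict.beta_ - enet_cd_strict.beta_))
print('strict gradient vs sklearn norm diff',
      np.linalg.norm(enet_grad_strict.beta_ - enet_sklearn.coef_))
print('strict cd vs sklearn norm diff',
      np.linalg.norm(enet_cd_strict.beta_ - enet_sklearn.coef_))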

I should note here that if you do just ridge regression (i.e. l1_ratio=0) the answers do seem to agree.
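
The ridge-only check I have in mind is along these lines (a sketch, not the exact code I ran: it reuses alpha from above and sets the L1 mixing weight to zero in both libraries, i.e. pyglmnet's alpha and sklearn's l1_ratio):

# ridge-only comparison: no L1 component in either library
ridge_kws = {'distr': 'gaussian', 'alpha': 0.0, 'reg_lambda': alpha}

ridge_cd = GLM(solver='cdfast', **ridge_kws)
ridge_cd.fit(X, y)

ridge_grad = GLM(solver='batch-gradient', **ridge_kws)
ridge_grad.fit(X, y)

ridge_sklearn = ElasticNet(alpha=alpha, l1_ratio=0.0)
ridge_sklearn.fit(X, y)

print('ridge gradient vs cd norm diff', np.linalg.norm(ridge_grad.beta_ - ridge_cd.beta_))
print('ridge cd vs sklearn norm diff', np.linalg.norm(ridge_cd.beta_ - ridge_sklearn.coef_))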

Lasso

lasso_tune_param = 0.9

kws = {'distr': 'gaussian',
       'alpha': 1, # just solve using lasso
       'reg_lambda': lasso_tune_param, # lasso penalty parameter
      }

# solve with coordinate descent
lasso_cd = GLM(solver='cdfast', **kws)
lasso_cd.fit(X, y)

# solve with gradient descent
lasso_grad = GLM(solver='batch-gradient', **kws)
lasso_grad.fit(X, y)

print('pyglmnet gradient vs cd norm diff', np.linalg.norm(lasso_grad.beta_ - lasso_cd.beta_))

# solve with sklearn
lasso_sklearn = Lasso(alpha=lasso_tune_param)
lasso_sklearn.fit(X, y)

print('gradient vs sklearn norm diff', np.linalg.norm(lasso_grad.beta_ - lasso_sklearn.coef_))
print('cd vs sklearn norm diff', np.linalg.norm(lasso_cd.beta_ - lasso_sklearn.coef_))

Again we get different answers:

pyglmnet gradient vs cd norm diff 13.940132420883737
gradient vs sklearn norm diff 13.548769667913621
cd vs sklearn norm diff 0.5271678205389818

Perhaps coordinate descent is getting close to sklearn, but I think that norm is still large enough to be concerning.
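
One way to probe which answer is closer to correct is to plug each estimate into sklearn's documented Lasso objective, (1 / (2 * n_samples)) * ||y - X @ beta - intercept||^2 + alpha * ||beta||_1, and see which solver attains the lower value. A sketch (assuming pyglmnet stores its fitted intercept in beta0_):

def sklearn_lasso_objective(beta, intercept, X, y, alpha):
    # sklearn's Lasso objective: (1 / (2 n)) * ||y - X beta - intercept||^2 + alpha * ||beta||_1
    resid = y - X @ beta - intercept
    return resid @ resid / (2 * len(y)) + alpha * np.abs(beta).sum()

for name, beta, intercept in [('cdfast', lasso_cd.beta_, lasso_cd.beta0_),
                              ('batch-gradient', lasso_grad.beta_, lasso_grad.beta0_),
                              ('sklearn', lasso_sklearn.coef_, lasso_sklearn.intercept_)]:
    print(name, sklearn_lasso_objective(beta, intercept, X, y, lasso_tune_param))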

Software versions

I am using Python 3.6.12, pyglmnet 1.1, and sklearn 0.24.1.

jasmainak commented 3 years ago

Do you mind checking on the development version? I get:

pyglmnet gradient vs cd 0.8964272943835351
gradient vs sklearn norm diff 0.0018012674728990078
cd vs sklearn norm diff 0.8961793549855578

pyglmnet gradient vs cd norm diff 1.390974023209033
gradient vs sklearn norm diff 0.0014402782631833914
cd vs sklearn norm diff 1.3908436488985736

We have made some fixes since the release, and I'm pretty confident the batch-gradient solver is correct. Regarding cdfast, I will defer to what @pavanramkumar says.

idc9 commented 3 years ago

I installed version 1.2.dev0 from GitHub and get the same answers as you do above, i.e. batch-gradient seems correct, but there still seems to be an issue with cdfast.

sjkoelle commented 3 months ago

I am also currently experiencing this issue and investigating