Open idc9 opened 3 years ago
Do you mind checking on the development version? I get:
pyglmnet gradient vs cd norm diff 0.8964272943835351
gradient vs sklearn norm diff 0.0018012674728990078
cd vs sklearn norm diff 0.8961793549855578

pyglmnet gradient vs cd norm diff 1.390974023209033
gradient vs sklearn norm diff 0.0014402782631833914
cd vs sklearn norm diff 1.3908436488985736
We have made some fixes since the release, and I'm pretty confident the batch-gradient solver is correct. Regarding cdfast, I will defer to what @pavanramkumar says.
I installed version 1.2.dev0 from GitHub and get the same answers as you do above, i.e. batch-gradient seems correct but there appears to be an issue with cdfast.
I am also currently experiencing this issue and am investigating.
The pyglmnet package gives different estimated coefficients for linear regression depending on which solver you select (e.g. cdfast or batch-gradient). Neither solver agrees with sklearn (I believe they should agree!). The code below gives an example for both ElasticNet and vanilla Lasso.
Reproducible examples
ElasticNet
First let's check elastic net
And we see the solutions are different:
I messed around with the optimization parameters (e.g. tol, learning_rate, max_iter) and could not get any of them to agree. Of course "different" depends on what you think the numerical tolerance should be, but these norms seem large enough to cause concern.
I should note here that if you do just ridge regression (i.e. l1_ratio=0) the answers do seem to agree.
Lasso
Again we get different answers
Perhaps coordinate descent is getting close to sklearn, but I think that norm is still large enough to be concerning.
Software versions
I am using Python 3.6.12, pyglmnet 1.1, and sklearn 0.24.1.