Closed cxrodgers closed 4 years ago
hmm ... what you say makes sense to me. @pavanramkumar any comments?
you might also want to check out: https://github.com/glm-tools/pyglmnet/pull/278. It's not merged yet because the fix didn't appear clean. But perhaps you have an opinion there
We have now changed the convergence criterion, and it pretty much aligns with what you proposed, @cxrodgers. Closing this for now. Feel free to reopen if you think the problem persists.
One thing I've noticed is that models with a large `beta` take a lot longer to reach convergence than those with a small `beta`. I wonder if it has to do with this line: https://github.com/glm-tools/pyglmnet/blob/7f1fbb7feae4cd6f18d3d830a90e4b28f9fbfdaf/pyglmnet/pyglmnet.py#L767

Convergence is assessed by:

```python
np.linalg.norm(beta - beta_old) < tol
```

Why not:

```python
np.linalg.norm(beta - beta_old) / np.linalg.norm(beta) < tol
```

Would that make sense? Basically, we just want to know that our estimate of `beta` is not changing by more than X% (e.g., 0.1%) between iterations. I find this more interpretable than an absolute change in the beta values. But I don't know if this makes sense from a theoretical perspective. Thanks for any comments!
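To illustrate the difference between the two criteria, here is a minimal sketch. The `converged` helper and the `eps` guard against division by zero are hypothetical additions for illustration, not pyglmnet's actual implementation; the numbers are made up to show that a large-magnitude `beta` can satisfy a relative tolerance while failing the same absolute tolerance.

```python
import numpy as np

def converged(beta, beta_old, tol=1e-3, eps=1e-12):
    """Relative convergence check (hypothetical helper): stop when
    beta changes by less than tol (e.g. 0.1%) between iterations,
    regardless of the overall magnitude of beta."""
    # eps guards against division by zero if beta is all zeros
    return np.linalg.norm(beta - beta_old) / max(np.linalg.norm(beta), eps) < tol

# A large-magnitude beta changing by roughly 0.05% per iteration:
beta_old = np.array([100.0, 200.0])
beta = np.array([100.05, 200.1])

print(converged(beta, beta_old, tol=1e-3))     # relative criterion: True
print(np.linalg.norm(beta - beta_old) < 1e-3)  # absolute criterion: False
```

Under the absolute criterion, the same `tol` that stops quickly for a small `beta` forces many more iterations here, since the raw change (~0.11) is far above `tol` even though it is tiny relative to the coefficients themselves.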