Closed pavanramkumar closed 6 years ago
@pavanramkumar Travis is not happy.
Are you sure it's an underflow/overflow problem? Because the gradient checks are fine on master.
@jasmainak I think it depends on what range we're checking the gradients in. Are you able to plot the loss on master? I'm deducing over/underflow because the loss is running into NaNs.
My hypothesis is that `beta` passes. Let me explicitly check the betas.
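For illustration only (this is not the pyglmnet code), a gradient check of the kind discussed above can be sketched with `scipy.optimize.check_grad`, which compares an analytic gradient against a finite-difference estimate. The quadratic loss here is a hypothetical stand-in for the GLM negative log-likelihood:

```python
import numpy as np
from scipy.optimize import check_grad

def loss(beta, X, y):
    # hypothetical squared-error loss standing in for the GLM negative logL
    return 0.5 * np.sum((X @ beta - y) ** 2)

def grad(beta, X, y):
    # analytic gradient of the loss above
    return X.T @ (X @ beta - y)

rng = np.random.RandomState(0)
X, y = rng.randn(20, 3), rng.randn(20)
beta0 = rng.randn(3)

# check_grad returns the norm of the difference between the analytic
# gradient and a finite-difference approximation at beta0
err = check_grad(loss, grad, beta0, X, y)
print(err)  # tiny if the gradient is correct
```

Note that the range of `beta0` matters here, as discussed above: a check that passes near zero can still blow up for extreme values where the loss under/overflows.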
Merging #250 into master will increase coverage by 0.99%. The diff coverage is 95.65%.
@@ Coverage Diff @@
## master #250 +/- ##
==========================================
+ Coverage 58.44% 59.44% +0.99%
==========================================
Files 7 7
Lines 1302 1339 +37
Branches 261 262 +1
==========================================
+ Hits 761 796 +35
Misses 473 473
- Partials 68 70 +2
Impacted Files | Coverage Δ |
---|---|
pyglmnet/pyglmnet.py | 79.46% <95.65%> (+1.24%) :arrow_up: |
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 65bb9f7...615c6b9. Read the comment docs.
@jasmainak made some more changes for the gradient check. Approximating the log-likelihood to prevent under/overflow did the trick.
See Eq. (17-20) here: https://pdfs.semanticscholar.org/0c03/0537919f09575b9f2c0a98c62f6571bdceee.pdf
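To illustrate the underflow such an approximation avoids (this sketch uses scipy's `log_ndtr` rather than the paper's Eq. 17-20, but the idea is the same): for a probit link, the log-likelihood involves log Φ(z), and Φ(z) underflows to 0 for moderately negative z, so the naive log returns -inf and poisons the loss with non-finite values.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import log_ndtr

z = -40.0

# naive: Phi(-40) is ~e^-800, far below the smallest double, so
# norm.cdf underflows to 0.0 and the log becomes -inf
with np.errstate(divide="ignore"):
    naive = np.log(norm.cdf(z))

# stable: log_ndtr evaluates log(Phi(z)) directly and stays finite
stable = log_ndtr(z)
print(naive, stable)  # -inf vs. a finite value around -800
```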
I also added some better NaN checks and reintroduced the "loss always decreases" test for probit/cdfast, which I had commented out in the previous commit.
Tests pass!
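A "loss always decreases" test of the kind reintroduced here can be sketched as follows; the plain gradient descent on a squared-error loss is a hypothetical stand-in for the probit/cdfast solver:

```python
import numpy as np

def run_gd(X, y, lr=0.01, n_iter=50):
    """Toy gradient descent; records the loss at every iterate."""
    beta = np.zeros(X.shape[1])
    losses = []
    for _ in range(n_iter):
        resid = X @ beta - y
        losses.append(0.5 * np.sum(resid ** 2))
        beta -= lr * (X.T @ resid)
    return losses

rng = np.random.RandomState(42)
X, y = rng.randn(30, 4), rng.randn(30)
losses = run_gd(X, y)

# the monotonicity test: no iteration may increase the loss
# (a small tolerance absorbs floating-point noise)
assert all(b <= a + 1e-12 for a, b in zip(losses, losses[1:]))
```

A test like this is a cheap end-to-end sanity check: it catches sign errors in the gradient and NaNs in the loss (a NaN fails any comparison) without needing a known ground truth.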
@pavanramkumar I'm happy if Travis is happy :) I didn't look closely at the equations but I'm confident it's correct.
It might have been worth putting a link to the paper in the code comments, as otherwise someone looking at the code in the future may be lost ...
closes #243