[Open] bbbales2 opened 4 years ago
Thanks for posting code. I was trying to verify it and I got something different:
Computed:
```
lp1.val(): -29.4707
lp2.val(): -29.4706
y.adj(): 0
alpha.adj(): 0
beta.adj(): 0
lp2.d_: -6.70499e+12
```
Reference:
```
lp1.val(): -29.4706
lp2.val(): -29.4706
y.adj(): -2.83796
alpha.adj(): 2.68924
beta.adj(): -11.3518
lp2.d_: -11.5005
```
Is there some reason my output is different from yours (even for the .val())?
@syclik aaah, yeah, you're right: I updated the numbers in the reference code without regenerating the output. I updated the post.
I think the issue is that the fwd gamma_q is an iterative algorithm with what might be a coarse tolerance: https://github.com/stan-dev/math/blob/develop/stan/math/fwd/fun/gamma_q.hpp#L32
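To illustrate the failure mode (a standalone sketch, not the code behind that link): take the standard series P(a, z) = sum_k z^(a+k) e^(-z) / Gamma(a+k+1), differentiate it term by term in a, and stop once a term drops below an absolute tolerance. In the tail (a = 2, z = 33, so Q ≈ 1.6e-13) the true dQ/da is only about 5e-13, so any tolerance much larger than that leaves pure truncation noise, and dividing by Q to form the derivative of log(gamma_q(...)) blows that noise up to the scale of the lp2.d_ above.
```cpp
#include <boost/math/special_functions/digamma.hpp>
#include <cmath>
#include <iostream>

// d/da P(a, z) via the term-wise derivative of
// P(a, z) = sum_k z^(a+k) e^(-z) / Gamma(a + k + 1); each derivative term
// is z^(a+k) e^(-z) * (log(z) - psi(a + k + 1)) / Gamma(a + k + 1).
double dP_da(double a, double z, double tol) {
  double sum = 0;
  for (int k = 0; k < 10000; ++k) {
    double log_weight = (a + k) * std::log(z) - z - std::lgamma(a + k + 1);
    double term = std::exp(log_weight)
                  * (std::log(z) - boost::math::digamma(a + k + 1));
    sum += term;
    // Absolute stopping rule, checked only past the peak of the series.
    if (k > z && std::fabs(term) < tol)
      break;
  }
  return sum;
}

int main() {
  double a = 2, z = 33;  // Q(2, 33) = 34 * exp(-33), roughly 1.6e-13
  // True dQ/da is ~5e-13; with tol = 1e-6 the result is truncation noise,
  // while a tolerance near machine precision recovers the right scale.
  std::cout << "tol 1e-6:  dQ/da = " << -dP_da(a, z, 1e-6) << "\n"
            << "tol 1e-16: dQ/da = " << -dP_da(a, z, 1e-16) << "\n";
}
```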
And for rev gamma_p the gradient gets thresholded to zero: https://github.com/stan-dev/math/blob/develop/stan/math/rev/fun/gamma_p.hpp#L39
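The thresholding interacts badly with tail probabilities: at the point above, dP/dz is itself only ~1.5e-13, so snapping it to zero looks harmless, but the quantity of interest is log(1 - P) = log Q, and the chain rule divides by Q ≈ 1.6e-13, making the true gradient O(1). A sketch of that arithmetic (the threshold constant here is invented for illustration; the guard in the linked code may differ):
```cpp
#include <boost/math/special_functions/gamma.hpp>
#include <cmath>
#include <iostream>

int main() {
  double a = 2, z = 33;
  double Q = boost::math::gamma_q(a, z);                 // ~1.6e-13
  double dP_dz = boost::math::gamma_p_derivative(a, z);  // ~1.5e-13
  // Gradient of lp = log(1 - P(a, z)) = log Q(a, z) with respect to z:
  std::cout << "true d lp/dz: " << -dP_dz / Q << "\n";   // ~ -0.97, O(1)
  // If the tiny raw partial is snapped to zero upstream, the adjoint that
  // reaches z is exactly 0, matching the zero adjoints reported above.
  double guarded = (std::fabs(dP_dz) < 1e-10) ? 0.0 : dP_dz;
  std::cout << "thresholded d lp/dz: " << -guarded / Q << "\n";
}
```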
Should we try what Boost offers here:
Maybe this stuff is new? I don't know the history, but this is worth considering.
Maybe. It looks like that gives one of the two gradients in each case. I'm not sure which one is causing the problems. I just made the issue so it's known (didn't really plan on fixing it).
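For concreteness, the Boost facility that seems relevant is boost::math::gamma_p_derivative(a, z), which returns dP/dz (so dQ/dz is just its negation); Boost does not appear to ship a corresponding routine for the shape-parameter derivative dP/da, which would be the missing "other" gradient:
```cpp
#include <boost/math/special_functions/gamma.hpp>
#include <iostream>

int main() {
  double a = 2, z = 33;
  // Boost gives the z-derivative of the regularized lower incomplete gamma
  // directly: gamma_p_derivative(a, z) = z^(a-1) e^(-z) / Gamma(a).
  double dP_dz = boost::math::gamma_p_derivative(a, z);
  std::cout << "dP/dz = " << dP_dz << "\n"
            << "dQ/dz = " << -dP_dz << "\n";
  // The a-derivative of gamma_p / gamma_q still needs a custom series or
  // quadrature; Boost has no off-the-shelf function for it.
}
```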
Description
gamma_p produces incorrect gradients with var, and gamma_q produces an incorrect tangent with fvar<double>.
Here is test code:
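(The snippet itself wasn't captured in this copy of the issue. Below is a minimal sketch of a comparable test, guessing from the printed quantities that lp is a gamma tail log-probability, log Q(alpha, beta * y), computed via log1m(gamma_p(...)) with reverse-mode var for lp1 and via log(gamma_q(...)) with fvar<double> for lp2, all three input tangents set to 1; that last guess is consistent with the reference values, where lp2.d_ = -11.5005 matches the sum of the three adjoints. The inputs are placeholders, so the printed numbers will not match the ones quoted above.)
```cpp
#include <stan/math/mix.hpp>
#include <iostream>

int main() {
  using stan::math::fvar;
  using stan::math::var;

  // Placeholder inputs (the originals aren't shown in the thread).
  double y_val = 3, alpha_val = 2, beta_val = 11;

  // lp1: reverse mode through gamma_p.
  var y = y_val, alpha = alpha_val, beta = beta_val;
  var lp1 = stan::math::log1m(stan::math::gamma_p(alpha, beta * y));
  lp1.grad();  // propagate adjoints back to y, alpha, beta
  std::cout << "lp1.val(): " << lp1.val() << "\n"
            << "y.adj(): " << y.adj() << "\n"
            << "alpha.adj(): " << alpha.adj() << "\n"
            << "beta.adj(): " << beta.adj() << "\n";

  // lp2: forward mode through gamma_q, all tangents set to 1, so lp2.d_
  // should equal the sum of the three reverse-mode adjoints above.
  fvar<double> yf(y_val, 1), alphaf(alpha_val, 1), betaf(beta_val, 1);
  fvar<double> lp2 = stan::math::log(stan::math::gamma_q(alphaf, betaf * yf));
  std::cout << "lp2.val(): " << lp2.val() << "\n"
            << "lp2.d_: " << lp2.d_ << "\n";
}
```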
The output for gamma_p and gamma_q (both computing the same function; lp1 uses gamma_p and lp2 uses gamma_q) is:
And the reference values are here:
(edit: updated output)
Current Version:
v3.3.0