Closed bbolker closed 8 months ago
Thanks for the debugging @bbolker - your suspicion is correct. There's an integer overflow in `jh = ceil(j)`, where `jh` is an `int` and `j = 3012750346.5651932` is a double greater than the maximum representable `int`.
Thanks @kaskr!
This must be tickling some bug in our Tweedie code. From Stack Overflow, a Tweedie model fit that reliably crashes R with
The next code chunk will crash your R session:
Let's set up the model piecewise so that we can enable tracing from within TMB:
Now run the optimization by hand; this also crashes.
The last parameters printed are:
but just calling the function with these approximate (rounded) parameters is not close enough to cause the crash:
I can get better precision this way:
but this doesn't help me identify "bad" parameters.
It seems like there's either some kind of leak/state-dependence (i.e., the crash depends on the full trajectory of the optimization/history of calls to the objective function, not just the last evaluation), or some delicate numerical criterion that triggers the crash, and we are only approximately there ...
A couple more pieces of information:
close to, but not identical to, the values from a previous run. This suggests some kind of undefined behaviour (e.g. accessing an uninitialized memory location??).
@kaskr, sorry to bother you ... maybe the next step is to run this within valgrind etc.? Here is the backtrace from running the code inside `gdb`: backtrace.txt

These seem to be the most useful bits ... ??
I think if I were going to obsess further over this I would probably instrument the Tweedie code in TMB to tell me what size vector it thought it was allocating here; could `nterms` be a negative value that integer-overflows to something gigantic ... ???