Actually, it turns out that Newton (or BFGS) often fails to converge when log-sigma is fairly high, which results in very smooth fits, especially from the model for sigma. In particular, in such cases the scale model has edf = 1 and the corresponding element(s) of the REML gradient are small (around 10^-4).
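For reference, here is a minimal sketch of how one might check for this symptom on a fitted object. It uses a plain mgcv::gam() fit with the gaulss family as a stand-in for the log-F location-scale fit (an assumption, since the logFlss family is internal to qgam); pen.edf() and outer.info are standard mgcv outputs.

```r
library(mgcv)

## Simulated location-scale data: the true scale is constant, which tends
## to push the scale smooth towards edf = 1, as described above.
set.seed(1)
n   <- 500
dat <- data.frame(x = runif(n))
dat$y <- sin(2 * pi * dat$x) + rnorm(n, sd = 0.3)

## Gaussian location-scale fit: one smooth for the mean, one for log-sigma.
fit <- gam(list(y ~ s(x), ~ s(x)), family = gaulss(), data = dat,
           optimizer = c("outer", "newton"))

pen.edf(fit)         ## per-penalty edf: look for the scale smooth stuck near 1
fit$outer.info$grad  ## REML gradient at "convergence": small (~1e-4) entries
fit$outer.info$conv  ## convergence message from the outer Newton iteration
```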
Notice also that this might be related to this bug:
https://github.com/mfasiolo/qgam/issues/13
where it turned out that, when lambda is a vector, things were completely messed up in tuneLearn and tuneLearnFast.
Probably solved by commit 45290e00837356d74ede239114b974efdb56cd6a.
When using tuneLearn or tuneLearnFast with logFlss, I often get complaints like:
log(sigma) = 0.428571 : outer Newton did not converge.
This happens even for not-so-extreme quantiles (say qu = 0.4).
Notice that these are simpleWarnings, so they are visible only if gam() gets called within a withCallingHandlers() call.
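For example, something along these lines surfaces them (a sketch: the mcycle data and the tuneLearnFast() arguments are just placeholders for whatever call triggers the warning):

```r
library(qgam)
library(MASS)  ## for the mcycle data

## Calling handlers see the simpleWarnings raised inside the internal
## gam() calls, so the "outer Newton did not converge" messages get printed.
withCallingHandlers(
  tun <- tuneLearnFast(accel ~ s(times, k = 20), data = mcycle, qu = 0.4),
  simpleWarning = function(w) message("caught: ", conditionMessage(w))
)
```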
Maybe it makes sense to mtrace() mgcv:::newton and see what is happening?
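For the record, a sketch of how one could do that with base R's debugging tools instead of debug::mtrace() (assuming the outer optimizer of interest is mgcv:::newton, as the warning message suggests):

```r
## Stop inside the outer optimizer on its next invocation and step through.
debugonce(mgcv:::newton)

## Or log every entry to newton() without stopping; remove with
## untrace("newton", where = asNamespace("mgcv")).
trace("newton", where = asNamespace("mgcv"),
      tracer = quote(cat("entering mgcv:::newton\n")))
```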