kellijohnson-NOAA opened this issue 1 year ago
On the topic of QQ plots, I fixed a bug in `residuals.sdmTMB()` yesterday where the offset wasn't getting included. This also affected `sdmTMBextra::predict_mle_mcmc()` and `sdmTMBextra::dharma_residuals()`. It looks like this code base is using an offset term. Sorry about that. Usually it makes sense to drop the offset when predicting on new data (predict for 1 effort unit or offset = 0), but obviously it doesn't make sense to drop it for residuals. It looks like I had tweaked the `predict()` call within the residuals function to use the `newdata` option when I added delta models, and that version (vs. `newdata = NULL`) drops the offset by default. I'll get this onto CRAN shortly, but for now the GitHub version is fixed.
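For anyone recomputing diagnostics after this fix, here is a minimal sketch of the residual QQ check, assuming `fit` is an existing sdmTMB delta-model fit, the development version is installed from the pbs-assess/sdmTMB GitHub repository, and the `model` argument for selecting a delta component matches your installed `?residuals.sdmTMB`:

```r
# Reinstall the development version that carries the offset fix
# remotes::install_github("pbs-assess/sdmTMB")
library(sdmTMB)

# Randomized quantile residuals; the offset stored in the fitted object
# should now be included automatically. The `model` argument selecting
# the delta component is an assumption; check ?residuals.sdmTMB.
r1 <- residuals(fit, model = 1)  # presence-absence component
r2 <- residuals(fit, model = 2)  # positive (rate) component

# Normal QQ plot for one component
qqnorm(r1)
qqline(r1)
```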
Check that the Q-Q plots are actually unique to each model, because changing the distribution does not appear to change the resulting figure even when the models differ in likelihood.
An additional thought, from an email request ... a plot of quantile residuals (y axis) vs. predicted values (x axis) would be a useful diagnostic.
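A rough, hypothetical sketch of that residuals-vs-predicted plot, assuming `fit` is a (non-delta) sdmTMB fit, that `predict(fit)` returns the link-scale prediction in a column named `est` (delta models return `est1`/`est2`), and that `residuals(fit)` returns the matching quantile residuals:

```r
# Quantile residuals (y) against predicted values on the link scale (x)
pred <- predict(fit)  # check names(pred) for the column holding the prediction
r <- residuals(fit)

plot(pred$est, r,
     xlab = "Predicted (link scale)",
     ylab = "Randomized quantile residual")
abline(h = 0, lty = 2)  # residuals should scatter evenly around zero
```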
Thanks @seananderson for updating the codebase. I see that the recommended `type` is not actually the default residual type. I tried the "mle-mcmc" type with a delta model and my R session crashed; I will do some checking to see whether this type is supported for delta models. Currently the figure is being generated with the "mle-laplace" residuals, which is quick!
And, @okenk, the figures that were originally placed on the network were only the QQ plots for the first model component, not the rate model. I have updated the code to plot both components on the same figure. These are QQ plots from canary for 2023, with the lognormal on the left and the gamma on the right.
Thanks! Also great that lognormal looks better since we had already picked that one. :)
Also tagging @brianlangseth-NOAA so he sees this.
@kellijohnson-NOAA Thank you for continuing to work on improving our diagnostic tools. Providing a QQ line for each of the models makes it easier to identify the error structure that best fits the data. Can I request that on the plot we modify the labeling in the legend from model 1 and 2 to "presence-absence" and "rate"? I don't think there is a pressing need to modify this now but I think long-term that labeling would be the clearest for users and reviewers.
You bet @chantelwetzel-noaa, I altered the legend text (see below). And, for those at large, I investigated `stats::qqline()`, which turns out to be different from `abline(0, 1)`; I had been using the latter previously. Now the figure includes the equivalent of what `stats::qqline()` would have added, but per model.
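For reference, `stats::qqline()` draws a line through the first and third sample quartiles plotted against the corresponding standard-normal quantiles, rather than the identity line. A small hypothetical helper (not part of any package used here) that reproduces those coefficients so a line can be added per model:

```r
# Intercept and slope of the line stats::qqline() would draw:
# through the 25th and 75th percentiles of the residuals versus the
# matching standard-normal quantiles.
qqline_coefs <- function(res, probs = c(0.25, 0.75)) {
  y <- quantile(res, probs, na.rm = TRUE, names = FALSE)
  x <- qnorm(probs)
  slope <- diff(y) / diff(x)
  c(intercept = y[1] - slope * x[1], slope = slope)
}

# Example use: one reference line per delta-model component
# (r1 and r2 are quantile residuals for each component, e.g. from residuals())
cf1 <- qqline_coefs(r1)
cf2 <- qqline_coefs(r2)
qqnorm(c(r1, r2))
abline(cf1["intercept"], cf1["slope"], col = "purple")
abline(cf2["intercept"], cf2["slope"], col = "darkcyan")
```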
This is great, thank you @kellijohnson-NOAA.
Will it be possible to get these plots for a model that's already been run, or is it easier to just update the package once you've completed this change and re-run the models?
If you're taking more requests, can I request that the colors not include yellow which is just hard to see when projected on a screen?
I just pushed the code to main, @iantaylor-NOAA, so you just need to reinstall and run:
load("petrale_sole/wcgbts/delta_lognormal/index/sdmTMB_save.RData")
plot_qq(fit, "petrale_sole/wcgbts/delta_lognormal/index/qq.png")
I am for sure open to discussion about the colors, but they were strategically chosen via {viridis} to be accessible to those with color-vision deficiencies and when printed in black and white.
Sometimes, when {viridis} is used for only two items and the colors default to that purple and yellow, I have modified the begin and end inputs to begin = 0 and end = 0.5, which changes the two colors to that purple and a teal instead of the yellow. However, I don't recall any projection issues with the default yellow to date, so perhaps that is not entirely necessary.
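For example, a quick way to see the effect of those arguments (illustration only; not necessarily the exact values used in the plotting code here):

```r
library(viridis)

# Default two-colour palette spans the full range: dark purple and yellow
viridis(2)

# Restricting the range keeps the purple but swaps the yellow for a teal
viridis(2, begin = 0, end = 0.5)
```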
On residuals, I assume you've seen it, but just in case: https://pbs-assess.github.io/sdmTMB/articles/web_only/residual-checking.html
Yes, you can do MCMC residuals (with fixed effects at their MLE) with delta models. That's what they did in this paper. Those are currently set up for each model component independently and will be slow for large models. You may be able to get away with fewer iterations than the default; keep in mind that only the last iteration is kept as the residuals. I need to rethink the interface, because it currently forces you to sample from the model for each model component. You could sample yourself and then grab as many sets of residuals as you want all at once. I can point you to code if you are interested; it's basically pulling code out of the `residuals.sdmTMB()` function.
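A hedged sketch of that MCMC-residual workflow, following the residual-checking vignette linked above; the argument names (`mcmc_iter`, `mcmc_warmup`, `mcmc_samples`) reflect my reading of the current documentation and may differ across versions:

```r
library(sdmTMB)
library(sdmTMBextra)

# Sample the random effects with MCMC while holding the fixed effects at
# their MLE. As noted above, this currently has to be done for each
# delta-model component separately; see the help files for the exact arguments.
set.seed(1)
samps <- predict_mle_mcmc(fit, mcmc_iter = 201, mcmc_warmup = 200)

# Quantile residuals based on those samples; only the last MCMC iteration
# is kept, so a short chain can be enough for a quick check.
r_mcmc <- residuals(fit, type = "mle-mcmc", mcmc_samples = samps)
qqnorm(r_mcmc)
qqline(r_mcmc)
```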
There are DHARMa residuals (`dharma_residuals()`, now over in sdmTMBextra), which should be fast and easy to calculate for the combined delta model. However, I believe they are known to indicate issues even when the model is correct, and I assume this goes back to the Laplace approximation, which also affects the 'default' residuals. I'm not sure if @Cole-Monnahan-NOAA or @Andrea-Havron-NOAA have advice based on their work.
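A hedged sketch of the DHARMa route, assuming `simulate()` on the fit returns a matrix of simulated responses and that `dharma_residuals()` takes that matrix followed by the fitted object (double-check against `?sdmTMBextra::dharma_residuals`):

```r
library(sdmTMB)
library(sdmTMBextra)

# Simulate new response data from the fitted (combined delta) model
set.seed(1)
s <- simulate(fit, nsim = 500)

# DHARMa-style simulation-based residuals with a QQ plot
dharma_residuals(s, fit)
```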
There are also one-step-ahead residuals, which aren't set up right now.
I'm not sure what the evolving best practice is here. These (randomized quantile residuals with the Laplace approximation) and DHARMa residuals with the Laplace approximation can definitely look off even if the model is consistent with the data. MCMC residuals with fixed effects at their MLE seem to work well but take longer. That's what they used here.
`?residuals.sdmTMB`
https://pbs-assess.github.io/sdmTMB/articles/web_only/residual-checking.html
Note that because of esoteric problems on CRAN, we had to split that out into its own (non-CRAN) package.
In a perfect world you'd sample with more MCMC iterations first.
_Originally posted by @seananderson in https://github.com/kellijohnson-NOAA/indexwc/pull/10#discussion_r1137503209_