florianhartig / DHARMa

Diagnostics for HierArchical Regression Models
http://florianhartig.github.io/DHARMa/

Dispersion calculations for mgcv tweedie distribution #415

Open florianhartig opened 5 months ago

florianhartig commented 5 months ago

Question via email: testDispersion shows overdispersion for an mgcv Tweedie fit, although the overall distribution looks fine and not overdispersed

[three screenshots attached]
florianhartig commented 5 months ago

OK, what I can say after a bit of experimentation and bugfix #417 is that the data shows overdispersion in the default DHARMa test, and underdispersion when looking at Pearson residuals and the PearsonChisq test.

library(mgcv)
library(DHARMa)

fit <- readRDS("~/Downloads/Basalarea_fit_Pinus.strobus_tekc(25,50).rds")

# just to get a rough idea about the dispersion - seems more underdispersed
x = residuals(fit, type = "scaled.pearson")
sd(x) # should be ~ 1 for a correctly specified model; < 1 suggests underdispersion

res = simulateResiduals(fit, n = 250, plot = F)
testDispersion(res) # overdispersion 
testDispersion(res, type = "PearsonChisq") # underdispersion, although note 
# that this test is biased towards underdispersion for strong REs
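
To illustrate the bias mentioned in the comment above, here is a constructed toy sketch (a Poisson GLMM fitted with lme4, not the user's mgcv model): for a correctly specified model with strong random intercepts, the conditional Pearson residuals are shrunk by the estimated random effects, so the PearsonChisq statistic tends to come out below 1.

# Toy sketch of the PearsonChisq bias under strong REs (constructed example,
# not the user's data): the model below is correctly specified, yet the
# Pearson chi2 / df statistic will typically fall below 1.
library(lme4)
library(DHARMa)
set.seed(123)

group <- factor(rep(1:100, each = 10))
re <- rnorm(100, sd = 1.5)                  # strong random intercepts
y <- rpois(1000, lambda = exp(0.5 + re[group]))
m <- glmer(y ~ 1 + (1 | group), family = poisson)

res <- simulateResiduals(m)
testDispersion(res, type = "PearsonChisq")  # typically suggests underdispersion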

As noted in the code comment above, the PearsonChisq test has a bias towards underdispersion, but looking at the sd of the Pearson residuals, I tend to think that this is not the issue here, and that we indeed have overdispersion according to the default DHARMa test and underdispersion according to the Pearson statistic at the same time.

Note that depending on the residual distribution, the two outcomes are not mutually exclusive, as the two statistics measure different things.
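
For intuition, here is a hand-rolled sketch of the two quantities (a simplification, not DHARMa's actual internals, which work on the full simulated distribution): the simulation-based measure compares the spread of the observed data around the fitted values with the spread of data simulated from the fitted model, while the Pearson statistic scales each squared residual by the variance the family implies. The names b0, mu, phi, sims below are illustrative.

# Hand-rolled sketch of the two dispersion measures (simplified, not the
# exact DHARMa implementation)
library(mgcv)
set.seed(42)

n <- 1000
x <- runif(n)
y <- rTweedie(exp(1 + x), p = 1.5, phi = 1.3)
b0 <- gam(y ~ s(x), family = Tweedie(p = 1.5))

# simulation-based measure: spread of observed vs. simulated data around mu
mu <- fitted(b0)
phi <- b0$sig2                                   # estimated scale parameter
sims <- replicate(250, rTweedie(mu, p = 1.5, phi = phi))
sd(y - mu) / mean(apply(sims, 2, function(s) sd(s - mu)))  # > 1 ~ overdispersed

# Pearson-based measure: squared residuals scaled by the family variance
sum(residuals(b0, type = "pearson")^2) / df.residual(b0)   # < 1 ~ underdispersed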

One thing that I noted is that if the p parameter is set incorrectly when fitting the Tweedie, the DHARMa test picks up the resulting overdispersion, while the PearsonChisq test does not; see the example below (note that this will currently only work with the development version of DHARMa).

library(mgcv)
library(DHARMa)

# simulate Tweedie data with a nonlinear mean structure
f2 <- function(x) 0.2 * x^11 * (10 * (1 - x))^6 + 10 *
  (10 * x)^3 * (1 - x)^10
n <- 3000
x <- runif(n)
mu <- exp(f2(x)/3 + .1)
x <- x*10 - 4
y <- rTweedie(mu, p = 1.5, phi = 1.3)

# correct p
b <- gam(y~s(x,k=20),family=Tweedie(p=1.5))

res = simulateResiduals(b, plot = T)
testDispersion(res)
testDispersion(res, type = "PearsonChisq")

# incorrect p
b <- gam(y~s(x,k=20),family=Tweedie(p=1.1))

res = simulateResiduals(b, plot = T)
testDispersion(res) # reacts to the pattern 
testDispersion(res, type = "PearsonChisq") # doesn't react to the pattern

# the reason is probably that the sd of the scaled Pearson residuals is still fine on average
x = residuals(b, type = "scaled.pearson")
sd(x)
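
To check this intuition (my sketch, continuing with the misspecified fit b from above): with the wrong p, the variance of the Pearson residuals is no longer constant in the fitted mean, but because the estimated scale parameter absorbs the average level, the global mean of the squared Pearson residuals can still be close to 1.

# squared Pearson residuals still average roughly 1 globally ...
pr <- residuals(b, type = "pearson")
mean(pr^2)

# ... but their variance trends with the fitted mean, which the global
# average washes out (expectation under a misspecified p, not a guarantee)
bins <- cut(fitted(b), quantile(fitted(b), 0:10 / 10), include.lowest = TRUE)
tapply(pr, bins, var)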

Quite possibly, this or a similar misspecification is causing the residual pattern in the original report. As the simulations show, neither test flags a problem when the model is correctly specified, so if the DHARMa test reacts, something in the model specification is probably wrong.
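
If a wrongly fixed p is indeed the cause, one option (my suggestion, assuming DHARMa's mgcv support extends to the tw() extended family) would be to let mgcv estimate the power parameter from the data instead of fixing it:

# re-fit letting mgcv estimate the Tweedie power parameter p in (1, 2)
# alongside the scale, then re-check the residuals
library(mgcv)
library(DHARMa)

b3 <- gam(y ~ s(x, k = 20), family = tw())
b3$family$getTheta(TRUE)  # estimated p, back-transformed to the original scale

res3 <- simulateResiduals(b3, plot = TRUE)
testDispersion(res3)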

All in all, while it's understandable that two tests with different test statistics may produce diverging results, I don't find it desirable that they diverge this far. I'll keep this issue open to re-think the dispersion statistics. The problem is, as we found when we tried it, that simulated Pearson residuals don't work because of the frequent sd = 0 numerical problem. So I currently don't see a test statistic that is closer to Pearson but remains stable under simulation.
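
For the record, the sd = 0 problem in a nutshell (a constructed toy case, not from the thread): if Pearson-type residuals are formed from simulations as (observed - mean(sim)) / sd(sim), any observation whose simulations are all identical, e.g. all zero at a very small mean, yields sd = 0 and the statistic blows up.

# toy illustration of the sd = 0 problem with simulation-based Pearson
# residuals: at a tiny mean, all simulations of an observation can be zero
set.seed(1)
mu <- c(5, 1, 1e-5)                            # last observation: tiny mean
sims <- sapply(mu, function(m) rpois(250, m))  # 250 simulations per obs

simSd <- apply(sims, 2, sd)
simSd                                          # last entry is 0 with high probability

obs <- rpois(3, mu)
(obs - colMeans(sims)) / simSd                 # NaN / Inf for the sd = 0 entry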