runehaubo / old-lmerTest

**OLD** lmerTest version - see runehaubo/lmerTestR for the new one

Feature request: Throw error instead of message for failure of Satterthwaite's approximation #5

Closed gdevenyi closed 6 years ago

gdevenyi commented 6 years ago

I'm currently using ddply to compute an lmer model and get summary statistics across subsets of dataframe.

I expect some of these models to fail to converge, and I'm okay with that. I have wrapped the code in a tryCatch to return NAs for the values.

Unfortunately, when lmerTest encounters an error, it prints a message but does not raise an error, meaning I cannot catch it.

```
Error in calculation of the Satterthwaite's approximation. The output of lme4 package is returned
summary from lme4 is returned
```
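For illustration, a minimal sketch of why an error-only handler misses this: in R, `message()` signals a condition of class `"message"`, not `"error"`, so `tryCatch(..., error = ...)` lets it pass through. (The message text below is a stand-in, not lmerTest's actual output.)

```r
# message() signals a "message" condition, so an error-only tryCatch()
# does not intercept it and the block runs to completion:
via_error_handler <- tryCatch({
  message("Satterthwaite approximation failed")  # stand-in message
  "no error raised"
}, error = function(e) "caught as error")
# via_error_handler is "no error raised"

# A message-specific handler, by contrast, does catch it:
via_message_handler <- tryCatch({
  message("Satterthwaite approximation failed")
  "no message raised"
}, message = function(m) "caught as message")
# via_message_handler is "caught as message"
```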
runehaubo commented 6 years ago

Thanks,

Which version of lmerTest are you using? If you are using the CRAN version, it may already be fixed in the devel version on GitHub. Try `library("devtools"); install_github("runehaubo/lmerTest")` if you haven't already. Or even better: try out the new, revised lmerTest with `library("devtools"); install_github("runehaubo/lmerTestR")`.

If that doesn't work, please post a code snippet or, better yet, a self-contained example, so that I know exactly what you would like to work better.

Rune

gdevenyi commented 6 years ago

Thanks for the quick feedback.

I'm running 2.0-36, but I see that GitHub still has the same bit of code that is causing me trouble.

I'm referring specifically to this: https://github.com/runehaubo/lmerTest/blob/master/R/satterth.R#L150-L154

I fully expect something like this to happen, since I'm fitting some "bad" lmer models along the way. However, because the failure is signalled with `message()`, I have no way to capture the problem with a tryCatch.

If you upgraded this to a warning or an error, I could handle it with tryCatch.

My current workaround is to check the length of the summary row I'm storing and substitute NAs for the outputs lmerTest would normally provide.

Example of what I'm doing:

```r
library(plyr)      # ddply
library(lmerTest)  # lmer() with Satterthwaite df and p-values

inside_vs_outside <- function(x) {
  tryCatch({
    localdata <- droplevels(x)
    result <- summary(lmer(log(signal) ~ sex + age + (1 | donor_id / probe_id) + inside,
                           data = localdata))
    # lmerTest normally adds df and p-value columns to the coefficient table;
    # a shorter row means the Satterthwaite approximation failed and the plain
    # lme4 summary came back instead
    if (length(result$coefficients["insideYes", ]) < 5) {
      c(NA, NA, NA, NA, NA)
    } else {
      result$coefficients["insideYes", ]
    }
  },
  error = function(e) c(NA, NA, NA, NA, NA))
}

results <- ddply(data, ~ gene_symbol, inside_vs_outside, .parallel = TRUE)
```

Due to various quality issues in the publicly available input data, some subsets produce a broken model, which I'm okay with. But because lmerTest "silently" falls back to the regular lme4 summary output, I get rows of a different length, which ddply doesn't like. The input dataframe is unfortunately 13 GB on disk, and since I'm fitting so many models I'm not sure exactly where it fails, so I can't currently narrow it down to a smaller example for you.

runehaubo commented 6 years ago

We are currently preparing the new lmerTest for its CRAN release (cf. https://stat.ethz.ch/pipermail/r-sig-mixed-models/2018q1/026596.html and https://github.com/runehaubo/lmerTestR - notice the R in lmerTestR) so I would rather not try to fix this in the old repo/version.

The new version has a better implementation precisely in the computation of Satterthwaite's df, and the code that gives you trouble no longer exists. Without a reproducible example it is hard to tell exactly what will happen, but my guess is that some of the fits that previously didn't work will work with the new version, and that you may get some additional warnings about models that didn't converge (some or all on top of convergence warnings from lme4::lmer). All in all: no messages, but perhaps additional warnings, which you can turn into errors with options(warn = 2).
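A minimal sketch of how `options(warn = 2)` promotes warnings to catchable errors, so an error-only tryCatch handler suffices (the warning text here is illustrative, not lmerTest's actual output):

```r
# With warn = 2, any warning is promoted to an error, so an error-only
# tryCatch() handler catches it.
old <- options(warn = 2)

res <- tryCatch({
  warning("model failed to converge")  # stand-in for a convergence warning
  "fit accepted"
}, error = function(e) c(NA, NA, NA, NA, NA))
# res is now five NAs rather than "fit accepted"

options(old)  # restore the previous warning behaviour
```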

Just like lme4::lmer returns the fitted object when a model failed to converge, lmerTest::lmer also returns the fitted model and will let you compute degrees of freedom and p-values using Satterthwaite's method: assessing whether the models converged 'well enough' is ultimately left for you to judge.

Cheers Rune

gdevenyi commented 6 years ago

Okay, if there will be a new implementation I will look into that and leave this as a legacy issue I've found a workaround for. Thanks.

gdevenyi commented 6 years ago

One final update: the lmerTestR repo version produces no errors with my 29,130 models, some of which were probably ill-conditioned :+1:

runehaubo commented 6 years ago

That sounds like a pretty comprehensive test :-) Thanks for letting us know!