florianhartig / DHARMa

Diagnostics for HierArchical Regression Models
http://florianhartig.github.io/DHARMa/

Error in runBenchmarks if parallel = T on Windows #332

Open danielrettelbach opened 2 years ago

danielrettelbach commented 2 years ago

Benchmarking DHARMa dispersiontests against AER dispersiontest for Poisson GLM

After running this chunk of the Rmd file I keep getting the following error, which suggests the function is not handed over to the parallel workers properly:

" out = runBenchmarks(doCalculations, controlValues = dispValues , nRep = 10, parallel = T) parallel, set cores automatically to 11 Error in checkForRemoteErrors(val) : 10 nodes produced errors; first error: Objekt 'doCalculations' nicht gefunden "

The number of nodes that produce errors always equals the automatically set number of cores minus one.
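For context (not part of the original report): on Windows, parallelisation in R typically uses PSOCK clusters, whose worker processes start with a clean workspace, so functions defined only in the master session have to be exported to them explicitly. A minimal base-R sketch of the same failure mode, independent of DHARMa; the names f and cl are only for illustration:

library(parallel)

f <- function(x) x^2                      # defined only in the master session

cl <- makeCluster(2)                      # PSOCK cluster, the default on Windows
# parSapply(cl, 1:4, function(i) f(i))    # errors: nodes produced errors; could not find function "f"
clusterExport(cl, "f")                    # copy f into each worker's global environment
parSapply(cl, 1:4, function(i) f(i))      # now returns 1 4 9 16
stopCluster(cl)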

florianhartig commented 2 years ago

Which Rmd file?

florianhartig commented 2 years ago

Could you maybe just post the code that produces the error?

danielrettelbach commented 2 years ago

I created a pull request which seems to fix the problem: https://github.com/florianhartig/DHARMa/pull/333

florianhartig commented 2 years ago

OK, but that PR definitely does more than fix the problem; for example, I definitely don't want a sleep argument in the simulation function. From what I can see, you may have had a package export problem?

If you can provide me with the code that fails, I can have a look what the problem is.

danielrettelbach commented 2 years ago

It is the code from the testPower.rmd file in DHARMa.

Benchmarking DHARMa dispersiontests against AER dispersiontest for Poisson GLM


library(DHARMa)   # createData, simulateResiduals, testUniformity, testDispersion, runBenchmarks
library(lme4)     # glmer, VarCorr, fixef

overdisp_fun <- function(model) {
  ## number of variance parameters in 
  ##   an n-by-n variance-covariance matrix
  vpars <- function(m) {
    nrow(m)*(nrow(m)+1)/2
  }
  model.df <- sum(sapply(VarCorr(model),vpars))+length(fixef(model))
  rdf <- nrow(model.frame(model))-model.df
  rp <- residuals(model,type="pearson")
  Pearson.chisq <- sum(rp^2)
  prat <- Pearson.chisq/rdf
  pval <- pchisq(Pearson.chisq, df=rdf, lower.tail=FALSE)
  list(chisq=Pearson.chisq,ratio=prat,rdf=rdf,p=pval)
}
# simulates data with a given overdispersion, fits a Poisson GLMM and
# returns the p-values of the different dispersion tests
doCalculations <- function(control = 0){
  testData = createData(sampleSize = 200, family = poisson(), overdispersion = control, randomEffectVariance = 1)
  fittedModel <- glmer(observedResponse ~ Environment1 + (1|group), data = testData, family = poisson())

  out = list()

  res <- simulateResiduals(fittedModel = fittedModel, n = 250)
  out$uniformTest = testUniformity(res)$p.value  
  out$Dispersion = testDispersion(res, plot = F)$p.value  
  out$DispersionAER = overdisp_fun(fittedModel)$p   # list element is named 'p', not 'p.value'

  res <- simulateResiduals(fittedModel = fittedModel, n = 250, refit = T)  
  out$DispersionRefitted = testDispersion(res, plot = F)$p.value  
  return(unlist(out))
}
# testing a single return
doCalculations(control = 0.3)
dispValues = seq(0,1.2, len = 5)
# running benchmark
out = runBenchmarks(doCalculations, controlValues = dispValues , nRep = 10,  parallel = T)
tests = c("uniformity", "DHARMa disp basic" , "GLMER dispersiontest", "DHARMa disp refit")
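
Until the parallel export issue is resolved, a simple (if slow) stopgap, given that the error above only appears with parallel = T, is to run the benchmark sequentially; this is only an illustrative workaround, not a fix for the export problem:

# stopgap: run sequentially so doCalculations does not have to reach any worker nodes
out = runBenchmarks(doCalculations, controlValues = dispValues, nRep = 10, parallel = FALSE)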