martablangiardo / ExcessDeathsItaly


Question about code errors and output interpretation #1

Open ClaudMor opened 3 years ago

ClaudMor commented 3 years ago

Hello,

Thanks for making the methods of your research publicly available. We followed your instructions: we installed the packages INLA, ggplot2, dplyr and gridExtra, and created the folders ./Output/Females and ./Output/Males.
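For reference, our setup was equivalent to the sketch below (the INLA repository URL is the one documented at r-inla.org; everything else is standard CRAN installation):

install.packages(c("ggplot2", "dplyr", "gridExtra"))
# INLA is not on CRAN, so it is installed from its own repository
install.packages("INLA",
                 repos = c(getOption("repos"),
                           INLA = "https://inla.r-inla-download.org/R/stable"),
                 dep = TRUE)
# Output folders expected by the scripts
dir.create("Output/Females", recursive = TRUE)
dir.create("Output/Males", recursive = TRUE)

With that in place, we run the make.functions.R script: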

# Code for the paper
# "Estimating weekly excess mortality at sub-national level in Italy during the COVID-19 pandemic"
# by Marta Blangiardo, Michela Cameletti, Monica Pirani, Gianni Corsetti, Marco Battaglini, Gianluca Baio

# Last version: 13/08/2020

# This file contains some functions used in the Model_Run.R file
################################################################################
# Functions for preparing the data

make.data=function(macro.regions=macro.regions,area,Sex) {
  # Makes the data by subsetting the relevant area and removing the data for 2020 and creating relevant indices
  data=macro.regions[[area]] %>% filter(Anno<max(Anno),sex==Sex) %>% mutate(SIGLA=droplevels(SIGLA))
  data$ID1=(data %>% mutate(IDarea=group_indices(.,ID_Ita)))$IDarea
  data=data %>% mutate(ID_prov=group_indices(.,SIGLA)) %>% select(-c(nord,centro,sud)) %>% select(sex,everything())
  data=data %>% mutate(IDtemp=as.numeric(as.character(IDtemp)))
  return(data)
}
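# NB (our sketch, untested): the deprecation warnings we report further down
# come from the group_indices(., ID_Ita) / group_indices(., SIGLA) calls above.
# Under dplyr >= 1.0 the documented idiom is to group first, e.g.
#   data %>% group_by(ID_Ita) %>% mutate(IDarea=cur_group_id()) %>% ungroup()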

################################################################################
# Function for simulating from the posterior distributions

make.posteriors = function(area=c("NordOvest","Lombardia","NordEst","Centro","Sud"),
                           Sex,nsim=1000){
  # area = macro region to analyse
  # Sex = Females or Males
  # nsim = number of simulations from the posterior distribution

  require(INLA)
  require(dplyr)

  # Loads the INLA object with the output
  file=paste0("Output/",Sex,"/output",area,".Rdata")
  load(file)

  # Data file
  data=as_tibble(m$.args$data)
  distr=m$.args$family

  # Now samples from the approximate joint posterior distribution of all the parameters
  tic=proc.time()
  post.samp=inla.posterior.sample(n=nsim,m,selection=list(
    "(Intercept)"=1,
    "week"=1:nrow(m$summary.random$week),
    "ID1"=1:(nrow(m$summary.random$ID1)/2), #NB Only needs the first half as there are 2 components to BYM
    "IDtemp"=1:nrow(m$summary.random$IDtemp),
    "Anno"=4
  ),
  num.threads = round(parallel::detectCores()*.8),
  verbose=FALSE
  )
  toc=proc.time()-tic
  # Formats simulations in a matrix (like BUGS would do)
  sim=matrix(unlist(lapply(post.samp,function(x) x$latent[1:nrow(post.samp[[1]]$latent),])),ncol=nrow(post.samp[[1]]$latent),byrow=T)
  colnames(sim)=rownames(post.samp[[1]]$latent)
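  # (Layout of the result, as we read it: sim is [nsim x n.latent], one row
  #  per posterior draw, with columns named after the latent components.)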

  # Now simulates from the posterior of the hyperparameters (to get sd for the 'Anno' effect)
  sd.anno=sqrt(1/inla.hyperpar.sample(nsim,m,improve.marginals=TRUE)[,"Precision for Anno"])
  # Then simulates from the predictive distribution for Anno=5
  anno.2020=rnorm(nsim,0,sd.anno)
  # Then substitutes the relevant column in 'sim'
  sim[,grep("Anno",colnames(sim))]=anno.2020
  file=paste0("Output/",Sex,"/posteriors",area,".Rdata")
  save(sim,data,distr,file=file)
}

################################################################################
# Function for computing the predictions for year 2020

make.predictions=function(area=c("NordOvest","Lombardia","NordEst","Centro","Sud"),Sex){
  # area = macro region to analyse
  # Sex = Females or Males
  # (nsim is recovered below from the loaded posterior simulations)

  require(dplyr)

  # Loads mortality & temperature data + data used to run INLA
  load("Data/mortality_temperature_data.Rdata")

  # Loads posteriors simulations
  file=paste0("Output/",Sex,"/posteriors",area,".Rdata")
  load(file)
  nsim=nrow(sim)

  # matrix of covariates (profiles) to compute the SMRs and predicted
  data.pred=data %>% filter(Anno==1) %>% select(-temperature,-temp_grp,-IDtemp,-morti,-E) 
  data.pred=data.pred %>% left_join(tempdata) %>% 
    mutate(ID1=as.factor(ID1),week=as.factor(week),ID_prov=as.factor(ID_prov))

  formula.covs=~ID1+week:ID_prov+Anno+IDtemp
  X=as_tibble(model.matrix(formula.covs,
                           data=data.pred,contrasts.arg=lapply(data.pred[,c("ID1","week","IDtemp")],contrasts,contrasts=FALSE))) %>% 
    select(contains("ID1"),contains("week"),contains("Anno"),contains("IDtemp"),`(Intercept)`)
  # NB: in Sud, the temperature is *always* higher than the first bin, so we need to remove one column
  if(area=="Sud") {
    X=X %>% select(-IDtemp1)
  }

  # Now computes logSMR as the linear predictor (on the log scale) = X\beta
  log.smr=as.matrix(X)%*%t(sim)
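  # (Dimensions, for clarity: X is [n.profiles x n.coef] and sim is
  #  [nsim x n.coef], so log.smr = X %*% t(sim) is [n.profiles x nsim],
  #  one column per posterior draw.)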
  # Renames the columns to be sim1,sim2,...,simnsim
  colnames(log.smr)=paste0("sim",1:nrow(sim))
  if(distr=="poisson") {
    SMR=bind_cols(as_tibble(exp(log.smr)),data.pred) %>% 
      select(COD_PROVCOM,COMUNE,COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week,Anno,everything()) %>% 
      mutate(Anno=5) %>% 
      left_join(morti_ordered %>% filter(sex==Sex,Anno==max(Anno)) %>% mutate(week=as.factor(week))) %>% 
      select(COD_PROVCOM,COMUNE,COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week,morti,E,everything()) %>% rename(Obs=morti)
    rm(log.smr)
    SMR=SMR %>% mutate(mean=apply(as.matrix(SMR %>% select(contains("sim"))),1,mean),
                       sd=apply(as.matrix(SMR %>% select(contains("sim"))),1,sd),
                       low=apply(as.matrix(SMR %>% select(contains("sim"))),1,quantile,.025),
                       upp=apply(as.matrix(SMR %>% select(contains("sim"))),1,quantile,.975)) 
    SMR=SMR %>% select(COD_PROVCOM,COMUNE,COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week,E,Obs,mean,sd,low,upp,contains("sim"))

    # Now makes matrix of simulations from the predictive distribution of the outcome. First creates the linear predictor on the natural scale
    linpred=SMR$E*as.matrix(SMR %>% select(contains("sim")),nrow=nrow(SMR),byrow=TRUE)
    # Different simulation scheme, depending on whether model is ZIP or Poisson
    # Simulates from a simple Poisson with parameter linpred
    pred=t(apply(linpred,1,function(x) rpois(nsim,x)))
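    # (Each row of linpred holds one rate per posterior draw; rpois(nsim,x)
    #  with a length-nsim rate vector x draws one Poisson count per rate,
    #  elementwise.)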
    colnames(pred)=paste0("sim",1:nsim)
    prediction=bind_cols(as_tibble(pred),SMR %>% select(-contains("sim")))
    prediction=prediction %>% mutate(mean=apply(as.matrix(prediction %>% select(contains("sim"))),1,mean,na.rm=T),
                                     median=apply(as.matrix(prediction %>% select(contains("sim"))),1,median,na.rm=T),
                                     sd=apply(as.matrix(prediction %>% select(contains("sim"))),1,sd,na.rm=T),
                                     low=apply(as.matrix(prediction %>% select(contains("sim"))),1,quantile,.025,na.rm=T),
                                     upp=apply(as.matrix(prediction %>% select(contains("sim"))),1,quantile,.975,na.rm=T)) %>% 
      select(COD_PROVCOM,COMUNE,COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week,Obs,E,mean,sd,low,median,upp,contains("sim"))
  }

  # Selects relevant regions for each macro-area (to be used to filter out relevant comuni)
  relevant=list()
  relevant[[1]]=c("Piemonte","Valle d'Aosta", "Liguria")
  relevant[[2]]=c("Lombardia")
  relevant[[3]]=c("Emilia-Romagna", "Trentino-Alto Adige","Veneto","Friuli Venezia Giulia")
  relevant[[4]]=c("Abruzzo","Lazio","Marche","Molise","Toscana","Umbria")
  relevant[[5]]=c("Basilicata","Calabria","Campania","Puglia","Sardegna","Sicilia")
  names(relevant)=c("NordOvest","Lombardia","NordEst","Centro","Sud")

  # Selects output of the function --- removes comuni outside the relevant regions
  res=list(SMR=SMR %>% filter(DEN_REG%in%relevant[[area]]),
           prediction=prediction %>% filter(DEN_REG%in%relevant[[area]]))
  file=paste0("Output/",Sex,"/predictions",area,".Rdata")
  save(res,file=file)
}

####################################################################################
#Functions for visualizing the results

vis.comuni=function(pred,comune,...) {
  require(ggplot2)
  rg=range((pred$prediction %>% filter(COMUNE==comune) %>% arrange(week) %>% select(mean,Obs)))
  dates=as.Date("2020-01-01")
  for (i in 2:max(pred$prediction$week)) {
    dates[i]=dates[(i-1)]+7
  }
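  # (Equivalent one-liner, for reference:
  #  dates=seq(as.Date("2020-01-01"),by=7,length.out=max(pred$prediction$week)))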
  dates=format(dates,format="%d%b")
  theme_set(theme_bw())
  ggplot(pred$prediction %>% filter(COMUNE==comune) %>% arrange(week),aes(week,mean)) + 
    geom_line(aes(y=mean),color="blue") + 
    geom_ribbon(aes(ymin=low,ymax=upp),alpha=.2) +
    scale_x_continuous("Week", breaks=1:length(dates), labels=dates) +
    geom_point(data=pred$prediction %>% filter(COMUNE==comune) %>% arrange(week),aes(week,Obs),color="red") +
    geom_line(data=pred$prediction %>% filter(COMUNE==comune) %>% arrange(week),aes(week,Obs),color="red") +
    theme(axis.text.x = element_text(color="black",size=13,angle=0,hjust=.5,vjust=.5,face="plain"),
          axis.text.y = element_text(color="black",size=13,angle=0,hjust=.5,vjust=.5,face="plain"),  
          axis.title.x = element_text(color="black",size=14,angle=0,hjust=.5,vjust=.5,face="plain"),
          axis.title.y = element_text(color="black",size=14,angle=90,hjust=.5,vjust=.5,face="plain")) +
    labs(y="All causes deaths",title=paste0("Municipality of ",comune," (",(pred$prediction %>% filter(COMUNE==comune))$DEN_REG,")")) +
    theme(axis.line = element_line(colour = "black"),
          panel.grid.major = element_blank(),
          panel.grid.minor = element_blank(),
          panel.border = element_blank(),
          panel.background = element_blank(),
          plot.title = element_text(size=18)) +
    geom_point(shape=20,colour="red",size=3,aes(x=.92,y=rg[2]*1.035)) +
    annotate(geom="segment",x=.8,y=rg[2]*1.052,xend=1.04,yend=rg[2]*1.052,color="grey",size=3) +
    annotate(geom="segment",x=.8,y=rg[2]*1.048,xend=1.04,yend=rg[2]*1.048,color="grey",size=3) +
    annotate(geom="segment",x=.8,y=rg[2]*1.05,xend=1.04,yend=rg[2]*1.05,color="blue",size=.8) +
    annotate(geom="text",x=1.15,y=rg[2]*1.05,label="Predicted deaths (mean and 95% interval)",size=5,vjust=.5,hjust=0) +
    annotate(geom="text",x=1.15,y=rg[2]*1.035,label="Observed deaths",size=5,vjust=.5,hjust=0)
}

vis.rate=function(pred,comune,sex="Male",...) {
  # pred = a list with the model predictions
  # comune = a string with the name of the municipality to plot
  exArgs=list(...)
  if(exists("title",exArgs)){title=exArgs$title} else {title=paste0("Municipality of ",comune," (",(pred$prediction %>% filter(COMUNE==comune))$DEN_REG,")")}
  if(exists("scale",exArgs)){scale=exArgs$scale} else {scale=100000}

  require(ggplot2)
  dates=as.Date("2020-01-01")
  for (i in 2:max(pred$prediction$week)) {
    dates[i]=dates[(i-1)]+7
  }
  dates=format(dates,format="%e\n%b")
  if(sex=="Male"){
    pred$prediction=pred$prediction %>% mutate(Total=Total.male)
  } else {
    pred$prediction=pred$prediction %>% mutate(Total=Total.female)
  }

  datatemp=pred$prediction %>% filter(COMUNE==comune) %>% arrange(week) %>% 
    mutate(rate=mean/Total*scale,rate.low=low/Total*scale,rate.upp=upp/Total*scale,rateobs=Obs/Total*scale)

  # Defines plot range
  rg=range(datatemp %>% select(rate.low,rate.upp,rateobs))

  # Make the plot
  theme_set(theme_bw())
  pl=ggplot(data=datatemp,aes(week,rate)) + 
    geom_line(aes(y=rate),color="blue") + 
    geom_ribbon(aes(ymin=rate.low,ymax=rate.upp),alpha=.2) +
    scale_x_continuous("Week", breaks=1:length(dates), labels=dates) 
  if(exists("ylim",exArgs)){
    rg=exArgs$ylim; rg[2]=rg[2]*1.055
    pl+scale_y_continuous(limits=rg)
  }
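  # (NB, as we read it: the result of pl+scale_y_continuous(...) above is not
  #  assigned back to pl, so only the change to rg survives this branch; the
  #  same pattern appears in vis.trends(). Is that intended?)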
  pl+geom_point(data=datatemp,aes(week,rateobs),color="red") +
    geom_line(data=datatemp,aes(week,rateobs),color="red") +
    theme(axis.text.x = element_text(color="black",size=10,angle=0,hjust=.5,vjust=.5,face="bold", family="Times"),
          axis.text.y = element_text(color="black",size=10,angle=0,hjust=.5,vjust=.5,face="bold", family="Times"),  
          axis.title.x = element_text(color="black",size=11,angle=0,hjust=.5,vjust=.5,face="bold", family="Times"),
          axis.title.y = element_text(color="black",size=11,angle=90,hjust=.5,vjust=.5,face="bold", family="Times")) +
    theme(axis.line = element_line(colour = "black"),
          panel.grid.major = element_blank(),
          panel.grid.minor = element_blank(),
          panel.border = element_blank(),
          panel.background = element_blank(),
          plot.title = element_text(size=10, face="bold", family="Times")) + 
    labs(y=paste0("Mortality rate for all causes (x",format(scale,big.mark=",",scientific=999),")"),title=title) +
    geom_point(shape=20,colour="red",size=3,aes(x=.92,y=rg[2]*1.01)) +
    annotate(geom="segment",x=.8,y=rg[2]*1.052,xend=1.04,yend=rg[2]*1.052,color="grey",size=3) +
    annotate(geom="segment",x=.8,y=rg[2]*1.048,xend=1.04,yend=rg[2]*1.048,color="grey",size=3) +
    annotate(geom="segment",x=.8,y=rg[2]*1.05,xend=1.04,yend=rg[2]*1.05,color="blue",size=.8) +
    annotate(geom="text",x=1.15,y=rg[2]*1.05,family="Times",
             label=paste0("Predicted mortality rate x",format(scale,big.mark=",",scientific=999)," (mean and 95% interval)"),size=3.8,vjust=.5,hjust=0) +
    annotate(geom="text",x=1.15,y=rg[2]*1.01,family="Times",
             label=paste0("Observed mortality rate x" ,format(scale,big.mark=",",scientific=999)),size=3.8,vjust=.5,hjust=0)+
    coord_cartesian(ylim = c(rg[1],rg[2]*1.05)) 
}

vis.xs.rate=function(pred,comune,...) {
  require(ggplot2)
  dates=as.Date("2020-01-01")
  for (i in 2:max(pred$prediction$week)) {
    dates[i]=dates[(i-1)]+7
  }
  dates=format(dates,format="%e\n%b")

  # Defines plot range
  rg=100*range((pred$prediction %>% filter(COMUNE==comune) %>% arrange(week) %>% select(low.excess,mean.excess)),na.rm=TRUE)

  # Make the plot
  theme_set(theme_bw())
  ggplot(pred$prediction %>% filter(COMUNE==comune) %>% arrange(week),aes(week,100*mean.excess)) + 
    geom_line(aes(y=100*mean.excess),color="blue") + 
    geom_ribbon(aes(ymin=100*low.excess,ymax=100*upp.excess),alpha=.2) +
    scale_y_continuous("Percentage excess mortality",breaks=seq(-200,100,by=50),labels=seq(-200,100,by=50),limits=c(-290,105)) +
    scale_x_continuous("Week", breaks=1:length(dates), labels=dates) +
    geom_hline(yintercept=0,size=.8, linetype="dashed") +
    geom_hline(yintercept=50,size=.8, linetype="dashed",color="red") +
    geom_hline(yintercept=100,size=.8, linetype="dashed",color="red") +
    theme(axis.text.x = element_text(color="black",size=10,angle=0,hjust=.5,vjust=.5,face="bold", family="Times"),
          axis.text.y = element_text(color="black",size=10,angle=0,hjust=.5,vjust=.5,face="bold", family="Times"),  
          axis.title.x = element_text(color="black",size=11,angle=0,hjust=.5,vjust=.5,face="bold", family="Times"),
          axis.title.y = element_text(color="black",size=11,angle=90,hjust=.5,vjust=.5,face="bold", family="Times")) +
    theme(axis.line = element_line(colour = "black"),
          panel.grid.major = element_blank(),
          panel.grid.minor = element_blank(),
          panel.border = element_blank(),
          panel.background = element_blank(),
          plot.title = element_text(size=12, face="bold", family="Times")) +
    labs(y="Percentage excess mortality",title=paste0("Municipality of ",comune," (",(pred$prediction %>% filter(COMUNE==comune))$DEN_REG,")")) +
    annotate(geom="text",x=1,y=50,label="1.5-fold increase",size=3.8,vjust=-1,hjust=0,family="Times") +
    annotate(geom="text",x=1,y=100,label="2-fold increase",size=3.8,vjust=-1,hjust=0,family="Times") +
    coord_cartesian(ylim = c(-100,105)) 
}

plot.smr=function(pred,comune) {
  require(ggplot2)
  dates=as.Date("2020-01-01")
  for (i in 2:max(pred$prediction$week)) {
    dates[i]=dates[(i-1)]+7
  }
  dates=format(dates,format="%d%b")
  rg=range(pred$SMR %>% filter(COMUNE==comune) %>% arrange(week) %>% mutate(rate=Obs/E) %>% select(mean,rate))
  theme_set(theme_bw())
  ggplot(pred$SMR %>% filter(COMUNE==comune) %>% arrange(week),aes(week,mean)) + 
    geom_line(aes(y=mean),color="blue") + 
    geom_ribbon(aes(ymin=low,ymax=upp),alpha=.2) +
    scale_x_continuous("Week", breaks=1:length(dates), labels=dates) +
    geom_point(data=pred$SMR %>% filter(COMUNE==comune) %>% arrange(week) %>% mutate(rate=Obs/E),aes(week,rate),color="red") +
    geom_line(data=pred$SMR %>% filter(COMUNE==comune) %>% arrange(week) %>% mutate(rate=Obs/E),aes(week,rate),color="red") +
    theme(axis.text.x = element_text(color="black",size=13,angle=0,hjust=.5,vjust=.5,face="plain"),
          axis.text.y = element_text(color="black",size=13,angle=0,hjust=.5,vjust=.5,face="plain"),  
          axis.title.x = element_text(color="black",size=14,angle=0,hjust=.5,vjust=.5,face="plain"),
          axis.title.y = element_text(color="black",size=14,angle=90,hjust=.5,vjust=.5,face="plain")) +
    labs(y="Standardised Mortality Ratio",title=paste0("Municipality of ",comune," (",(pred$prediction %>% filter(COMUNE==comune))$DEN_REG,")")) +
    theme(axis.line = element_line(colour = "black"),
          panel.grid.major = element_blank(),
          panel.grid.minor = element_blank(),
          panel.border = element_blank(),
          panel.background = element_blank(),
          plot.title= element_text(size=18)) +
    geom_point(shape=20,colour="red",size=3,aes(x=1.0,y=rg[2]*1.035)) +
    annotate(geom="segment",x=.8,y=rg[2]*1.052,xend=1.04,yend=rg[2]*1.052,color="grey",size=3) +
    annotate(geom="segment",x=.8,y=rg[2]*1.048,xend=1.04,yend=rg[2]*1.048,color="grey",size=3) +
    annotate(geom="segment",x=.8,y=rg[2]*1.05,xend=1.04,yend=rg[2]*1.05,color="blue",size=.8) +
    annotate(geom="text",x=1.15,y=rg[2]*1.05,label="Predicted SMR (mean and 95% interval)",size=5,vjust=.5,hjust=0) +
    annotate(geom="text",x=1.15,y=rg[2]*1.035,label="Observed SMR",size=5,vjust=.5,hjust=0)
}

make.map=function(w) {
  # Make map using ggplot
  require(ggplot2)
  toplot=prov.shp
  toplot@data=as_tibble(prov.shp@data %>% mutate(COD_REG=as.numeric(as.character(COD_REG)),
                                                 COD_PROV=as.numeric(as.character(COD_PROV)))) %>% 
    left_join(pred.italy$prediction_prov %>% filter(week==w) %>% 
                # NB: Need to recode the COD_PROV for the 4 new provinces otherwise won't merge
                mutate(COD_PROV=case_when(
                  COD_PROV==104~108,
                  COD_PROV==105~109,
                  COD_PROV==106~110,
                  COD_PROV==107~111,
                  TRUE~as.numeric(COD_PROV)))
    )

  theme_set(theme_bw())
  dates=as.Date("2020-01-01")
  for (i in 2:max(pred.italy$prediction_prov$week)) {
    dates[i]=dates[(i-1)]+7
  }
  dates=format(dates,format="%d%b")
  shades=colorRampPalette(c("white","blue","red"))#c("grey100", "grey30"))
  ###shades=RColorBrewer::brewer.pal(4,"Greens")
  ggplot(data=sf::st_as_sf(toplot,coords=c(SHAPE_AREA,SHAPE_LEN))) +
    geom_sf()+
    geom_sf(data = sf::st_as_sf(toplot,coords=c(SHAPE_AREA,SHAPE_LEN)), aes(fill = mean.excess),
            ## This controls the thickness of the borders
            size=.4) +
    ### Check this: https://stackoverflow.com/questions/49497182/ggplot-create-a-border-overlay-on-top-of-map
    ### to also plot the regions borders on top of the provinces
    geom_sf(data = sf::st_as_sf(toplot,coords=c(SHAPE_AREA,SHAPE_LEN)) %>% group_by(COD_REG) %>% summarise(),            
            fill="transparent",color="gray20",size=1.1) +
    theme(axis.line=element_blank(),axis.text.x=element_blank(),
          axis.text.y=element_blank(),axis.ticks=element_blank(),
          axis.title.x=element_blank(),
          axis.title.y=element_blank()) +
    scale_fill_gradientn(
      colours=shades(20),
      name="Posterior mean of excess mortality",
      na.value = "grey100", 
      limits=c(-2.5,1.5)
      #trans = "log",
      #breaks = my_breaks, labels = my_breaks
    ) +  
    ggtitle(paste0("Week of ",dates[w])) 
  #theme(legend.position = pos)
}

vis.trends=function(pred,region,...) {
  # pred = a list with the model predictions
  # region = a string or a vector of strings with the names of the region(s) to plot
  exArgs=list(...)
  if(exists("quants",exArgs)){q=exArgs$quants} else {q=c(.025,.975)}
  if(exists("title",exArgs)){title=exArgs$title} else {title=paste0(region,collapse=", ")}

  require(ggplot2)
  # creates temporary dataset with only the relevant regions
  datatemp=pred$prediction_reg %>% filter(DEN_REG %in% region) %>% arrange(week) 
  datatemp=datatemp %>% 
    mutate(low=apply(as.matrix(datatemp %>% select(contains("sim"))),1,quantile,q[1]),
           upp=apply(as.matrix(datatemp %>% select(contains("sim"))),1,quantile,q[2]))
  # if more than one region, then sum up the numbers
  if(length(region)>1) {
    datatemp=datatemp %>% group_by(week) %>% 
      summarise_at(vars("Obs",starts_with("sim")),funs(sum)) %>% ungroup()
    datatemp=datatemp %>% 
      mutate(mean=apply(as.matrix(datatemp %>% select(contains("sim"))),1,mean),
             median=apply(as.matrix(datatemp %>% select(contains("sim"))),1,median),
             sd=apply(as.matrix(datatemp %>% select(contains("sim"))),1,sd),
             low=apply(as.matrix(datatemp %>% select(contains("sim"))),1,quantile,q[1]),
             upp=apply(as.matrix(datatemp %>% select(contains("sim"))),1,quantile,q[2])) %>% 
      select(week,Obs,mean,sd,low,median,upp,everything())
  }

  rg=range((datatemp %>% select(low,upp,Obs)))

  dates=as.Date("2020-01-01")
  for (i in 2:max(pred$prediction$week)) {
    dates[i]=dates[(i-1)]+7
  }
  dates=format(dates,format="%d-%b")

  theme_set(theme_bw())
  pl=ggplot(datatemp,aes(week,mean)) + 
    geom_line(aes(y=mean),color="blue") + 
    geom_ribbon(aes(ymin=low,ymax=upp),alpha=.2) +
    scale_x_continuous("Week", breaks=1:length(dates), labels=dates) 
  if(exists("ylim",exArgs)){
    rg=exArgs$ylim; rg[2]=rg[2]*1.055
    pl+scale_y_continuous(limits=rg)
  }
  pl + geom_point(data=datatemp,aes(week,Obs),color="red") +
    geom_line(data=datatemp,aes(week,Obs),color="red") +
    theme(axis.text.x = element_text(color="black",size=13,angle=0,hjust=.5,vjust=.5,face="plain"),
          axis.text.y = element_text(color="black",size=13,angle=0,hjust=.5,vjust=.5,face="plain"),  
          axis.title.x = element_text(color="black",size=14,angle=0,hjust=.5,vjust=.5,face="plain"),
          axis.title.y = element_text(color="black",size=14,angle=90,hjust=.5,vjust=.5,face="plain")) +
    labs(y="All causes deaths",title=title) +
    theme(axis.line = element_line(colour = "black"),
          panel.grid.major = element_blank(),
          panel.grid.minor = element_blank(),
          panel.border = element_blank(),
          panel.background = element_blank(),
          plot.title = element_text(size=18)) +
    geom_point(shape=20,colour="red",size=3,aes(x=.92,y=rg[2]*1.015)) +
    annotate(geom="segment",x=.8,y=rg[2]*1.052,xend=1.04,yend=rg[2]*1.052,color="grey",size=3) +
    annotate(geom="segment",x=.8,y=rg[2]*1.048,xend=1.04,yend=rg[2]*1.048,color="grey",size=3) +
    annotate(geom="segment",x=.8,y=rg[2]*1.05,xend=1.04,yend=rg[2]*1.05,color="blue",size=.8) +
    annotate(geom="text",x=1.15,y=rg[2]*1.05,label=paste0("Predicted deaths (mean and ",100*(q[2]-q[1]),"% interval)"),
             size=5,vjust=.5,hjust=0) +
    annotate(geom="text",x=1.15,y=rg[2]*1.015,label="Observed deaths",size=5,vjust=.5,hjust=0) + 
    geom_vline(xintercept=(10+5/7),linetype="dashed",color="black",size=.6) + 
    annotate(geom="text",x=(10+4.5/7),y=rg[2]*.85,label="Italy goes in \nlockdown on 9 March",size=3.8,vjust=0,hjust=1, family="Times", face="bold") + 
    # Add this to ensure multiple graphs in grid.arrange have the same scale
    # source: https://stackoverflow.com/questions/41768273/create-ggplots-with-the-same-scale-in-r
    coord_cartesian(ylim = c(rg[1],rg[2]*1.05)) 
}

and the Model_Run.R script:

# Code for the paper
# "Estimating weekly excess mortality at sub-national level in Italy during the COVID-19 pandemic"
# by Marta Blangiardo, Michela Cameletti, Monica Pirani, Gianni Corsetti, Marco Battaglini, Gianluca Baio

# Last version: 13/08/2020

library(dplyr)
library(INLA)

# Load extra functions
source("make.functions.R")

############################################################################
# 1. PREPARE DATA 
############################################################################
## Load the macro areas data
load("./Data/MacroRegions.Rdata")

# Now prepare the data
Sex = "Females" #other possible choice: Males
area = "NordOvest" #other possible choices: NordEst, Sud, Centro, Lombardia
data = make.data(macro.regions,area,Sex=Sex)
graph = paste0("./Graphs/",tolower(area),".graph")

############################################################################

############################################################################
# 2. RUN INLA & SAVE THE OUTPUT 
############################################################################

## Formula with temperature
formula = morti ~ 1 + 
  f(ID1,model="bym",graph=graph,scale.model=T,
    hyper=list(theta1=list(prior="loggamma",param=c(1,0.1)),theta2=list(prior="loggamma",param=c(1,0.1)))) +
  f(week,model="rw1",replicate=ID_prov,scale.model=TRUE,hyper=list(prec=list(prior="loggamma",param=c(1,0.1)))) +
  f(Anno,model="iid") +
  f(IDtemp,model="rw2",scale.model=TRUE,hyper=list(theta=list(prior="loggamma",param=c(1,0.1))))
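# (Model structure, as we read it: a BYM spatial effect over municipalities
#  (ID1), an RW1 weekly effect replicated by province, an iid yearly effect
#  (Anno) and an RW2 effect of the temperature category (IDtemp), each with
#  log-Gamma priors on the log-precisions.)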

# INLA SET UP
# Under Poisson uses default set up
control.family=inla.set.control.family.default()
# Defines the correct variable to offset the rates in the log-linear predictor
offset = data$E

m = inla(formula,
         data=data,
         E=offset,
         family="Poisson",
         control.family=control.family,
         verbose = TRUE,
         num.threads = round(parallel::detectCores()*.8),
         control.compute=list(config = TRUE)
)
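# (config=TRUE stores the internal approximations that inla.posterior.sample()
#  needs later in make.posteriors().)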

file=paste0("Output/",Sex,"/output",area,".Rdata")
save(m,file=file)

############################################################################
# 3. GIVEN THE INLA OUTPUTS, 
# SIMULATE FROM THE POSTERIOR DISTRIBUTION & SAVE THE OUTPUT 
############################################################################
# 
make.posteriors(area,Sex)

############################################################################
# 4. GIVEN THE INLA OUTPUTS AND SIMULATIONS,
# COMPUTE THE PREDICTIONS FOR 2020 & SAVE THE OUTPUT 
############################################################################
make.predictions(area,Sex)

############################################################################
# 5. COMBINE THE OUTCOMES 
############################################################################
# Analysis of all output
# Loads all the results
require(dplyr)

pred.italy=list()
for (i in names(macro.regions)){
  file=paste0("Output/",Sex,"/predictions",i,".Rdata")
  load(file)
  pred.italy$SMR=bind_rows(pred.italy$SMR,res$SMR)
  pred.italy$prediction=bind_rows(pred.italy$prediction,res$prediction)
  rm(res)
}

####
# Removes the records that don't have values in the simulations (comuni that don't match the 2016-2019 data)
# also makes 'week' a numeric variable (not a factor)
pred.italy$SMR=pred.italy$SMR %>% filter(!is.na(Obs)) %>% mutate(week=as.numeric(as.character(week)))
pred.italy$prediction=pred.italy$prediction %>% filter(!is.na(Obs)) %>% mutate(week=as.numeric(as.character(week)))

# Aggregates by province
pred.italy$SMR_prov=(pred.italy$SMR %>% group_by(DEN_UTS,week) %>% arrange(COD_PROV,week) %>% slice(1) %>% 
                       select(COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week) %>% ungroup()) %>% 
  bind_cols(pred.italy$SMR %>% group_by(DEN_UTS,week) %>% arrange(week) %>% 
              summarise_at(vars("Obs",starts_with("sim")),funs(mean)) %>% ungroup()) %>% select(-c(week...8,DEN_UTS...7)) %>%
  rename(week=week...6,DEN_UTS=DEN_UTS...2)
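# (The week...6 / DEN_UTS...2 style names come from bind_cols() repairing
#  duplicated column names in dplyr >= 1.0; the select/rename above undoes
#  that repair.)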
pred.italy$SMR_prov=pred.italy$SMR_prov %>% 
  mutate(mean=apply(as.matrix(pred.italy$SMR_prov %>% select(contains("sim"))),1,mean,na.rm=TRUE),
         median=apply(as.matrix(pred.italy$SMR_prov %>% select(contains("sim"))),1,median,na.rm=TRUE),
         sd=apply(as.matrix(pred.italy$SMR_prov %>% select(contains("sim"))),1,sd,na.rm=TRUE),
         low=apply(as.matrix(pred.italy$SMR_prov %>% select(contains("sim"))),1,quantile,.025,na.rm=TRUE),
         upp=apply(as.matrix(pred.italy$SMR_prov %>% select(contains("sim"))),1,quantile,.975,na.rm=TRUE)) %>% 
  select(COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week,Obs,mean,sd,low,median,upp,everything())

pred.italy$prediction_prov=(pred.italy$prediction %>% group_by(DEN_UTS,week) %>% arrange(COD_PROV,week) %>% slice(1) %>% 
                              select(COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week) %>% ungroup()) %>% 
  bind_cols(pred.italy$prediction %>% group_by(DEN_UTS,week) %>% arrange(COD_PROV,week) %>% 
              summarise_at(vars("Obs",starts_with("sim")),funs(sum)) %>% ungroup()) %>% 
  select(-c(week...8,DEN_UTS...7)) %>% rename(week=week...6,DEN_UTS=DEN_UTS...2)
pred.italy$prediction_prov=pred.italy$prediction_prov %>% 
  mutate(mean=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("sim"))),1,mean,na.rm=TRUE),
         median=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("sim"))),1,median,na.rm=TRUE),
         sd=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("sim"))),1,sd,na.rm=TRUE),
         low=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("sim"))),1,quantile,.025,na.rm=TRUE),
         upp=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("sim"))),1,quantile,.975,na.rm=TRUE)) %>% 
  select(COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week,Obs,mean,sd,low,median,upp,everything())

# Aggregates by region
pred.italy$prediction_reg=(pred.italy$prediction %>% group_by(DEN_REG,week) %>% arrange(COD_PROV,week) %>% slice(1) %>% 
                             select(COD_REG,DEN_REG,week) %>% ungroup()) %>% 
  bind_cols(pred.italy$prediction %>% group_by(DEN_REG,week) %>% arrange(COD_PROV,week) %>% 
              summarise_at(vars("Obs",starts_with("sim")),funs(sum)) %>% ungroup()) %>% 
  select(-c(week...5,DEN_REG...4)) %>% rename(week=week...3,DEN_REG=DEN_REG...2)
pred.italy$prediction_reg=pred.italy$prediction_reg %>% 
  mutate(mean=apply(as.matrix(pred.italy$prediction_reg %>% select(contains("sim"))),1,mean,na.rm=TRUE),
         median=apply(as.matrix(pred.italy$prediction_reg %>% select(contains("sim"))),1,median,na.rm=TRUE),
         sd=apply(as.matrix(pred.italy$prediction_reg %>% select(contains("sim"))),1,sd,na.rm=TRUE),
         low=apply(as.matrix(pred.italy$prediction_reg %>% select(contains("sim"))),1,quantile,.025,na.rm=TRUE),
         upp=apply(as.matrix(pred.italy$prediction_reg %>% select(contains("sim"))),1,quantile,.975,na.rm=TRUE)) %>% 
  select(DEN_REG,COD_REG,week,Obs,mean,sd,low,median,upp,everything())
####

# Creates excess deaths [= (obs-est)/obs, per simulation]
xs=-1*sweep(as.matrix(pred.italy$prediction_prov %>% select(contains("sim"))),
            1,pred.italy$prediction_prov$Obs,FUN ="-")
xs=sweep(as.matrix(xs),1,pred.italy$prediction_prov$Obs,FUN ="/")
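# (Worked example of what this computes: Obs=10 and a simulated prediction of
#  8 give xs=(10-8)/10=0.2, i.e. a 20% excess relative to the observed count.
#  Hedged note: the "1.5-fold"/"2-fold" labels at 50/100% in vis.xs.rate would
#  instead match (obs-est)/est, so perhaps the intended denominator is the
#  prediction rather than Obs?)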
colnames(xs)=stringr::str_replace(colnames(xs),"sim","xs")
pred.italy$prediction_prov=pred.italy$prediction_prov %>% bind_cols(as_tibble(xs)) 
pred.italy$prediction_prov=pred.italy$prediction_prov %>% mutate(
  mean.excess=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("xs"))),1,mean,na.rm=T),
  sd.excess=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("xs"))),1,sd,na.rm=T),
  low.excess=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("xs"))),1,quantile,.025,na.rm=T),
  upp.excess=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("xs"))),1,quantile,.975,na.rm=T)
) %>% select(COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week,Obs,mean.excess,sd.excess,low.excess,upp.excess,everything())

# Creates excess deaths [= (obs-est)/obs, per simulation]
xs=-1*sweep(as.matrix(pred.italy$prediction %>% select(contains("sim"))),
            1,pred.italy$prediction$Obs,FUN ="-")
xs=sweep(as.matrix(xs),1,pred.italy$prediction$Obs,FUN ="/")
colnames(xs)=stringr::str_replace(colnames(xs),"sim","xs")
pred.italy$prediction=pred.italy$prediction %>% bind_cols(as_tibble(xs)) 
pred.italy$prediction=pred.italy$prediction %>% 
  mutate(
    mean.excess=apply(as.matrix(pred.italy$prediction %>% select(contains("xs"))),1,mean,na.rm=T),
    sd.excess=apply(as.matrix(pred.italy$prediction %>% select(contains("xs"))),1,sd,na.rm=T),
    low.excess=apply(as.matrix(pred.italy$prediction %>% select(contains("xs"))),1,quantile,.025,na.rm=T),
    upp.excess=apply(as.matrix(pred.italy$prediction %>% select(contains("xs"))),1,quantile,.975,na.rm=T)
  ) %>% select(COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week,Obs,mean.excess,sd.excess,low.excess,upp.excess,everything())

# Add population from official data and extrapolation at 2020
pop.tot=as_tibble(read.table("p20_att.txt",header = TRUE))
# Aggregate over age groups to totals by comune
pop.tot=pop.tot %>% rename(COD_PROVCOM=PRO_COM_ATT) %>% group_by(COD_PROVCOM) %>% mutate(Total.male=sum(n_m),Total.female=sum(n_f)) %>% 
  slice(1) %>% ungroup() %>% select(COD_PROVCOM,contains("Total"))
# Add population to predictions
pred.italy$prediction=pred.italy$prediction %>% left_join(pop.tot) %>% select(COD_PROVCOM,COMUNE,Total.male,Total.female,everything()) %>% 
  mutate(mean.excess=case_when(Obs==0~NA_real_,
                               TRUE~mean.excess),
         sd.excess=case_when(Obs==0~NA_real_,
                             TRUE~sd.excess),
         low.excess=case_when(Obs==0~NA_real_,
                              TRUE~low.excess),
         upp.excess=case_when(Obs==0~NA_real_,
                              TRUE~upp.excess)
  )
save(pred.italy,file=paste0("Output/",Sex,"/predItaly.Rdata"))

############################################################################
# 6. VISUALISE THE RESULTS
############################################################################
library(gridExtra)

file=paste0("Output/",Sex,"/predItaly.Rdata")
load(file)

# Plots the predicted deaths by comune & week

# Selects comuni with the highest percentage excess deaths
comuni2plot=as.character((pred.italy$prediction %>% filter(Total.male+Total.female>=100000) %>% arrange(desc(mean.excess)) %>% 
                            select(mean.excess,COMUNE,DEN_UTS,DEN_REG) %>% group_by(COMUNE) %>% slice(1) %>% ungroup() %>% 
                            arrange(desc(mean.excess)))$COMUNE)[1:9]
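# NB: the line below overrides the data-driven selection above with a fixed list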
comuni2plot=c("Bergamo","Piacenza","Brescia","Parma","Milano","Cremona","Pesaro","Verona","Bologna")

plot.list=lapply(comuni2plot,function(x) vis.xs.rate(pred.italy,x, sex="Females"))
pdf("excess_rate_females.pdf",width=30,height=24)
do.call(grid.arrange,c(plot.list,ncol=3))
dev.off()

plot.list2=lapply(comuni2plot,function(x) vis.rate(pred.italy,x,sex="Females", multi=TRUE))
pdf("excess_mortality_females.pdf",width=30,height=24)
do.call(grid.arrange,c(plot.list2,ncol=3))
dev.off()

pdf("WeeklyTrendProv_Females.pdf")
lemon::grid_arrange_shared_legend(
  make.map(1),make.map(2),make.map(3),make.map(4),make.map(5),make.map(6),
  make.map(7),make.map(8),make.map(9),
  make.map(10),make.map(11),make.map(12),
  make.map(13),nrow=4,ncol=4,position="bottom" 
)
dev.off()

# Overall trends
# BUT MAKE TRENDS x10,000

vis.trends(pred.italy,"Lombardia")
regions=as.character((pred.italy$prediction_reg %>% group_by(DEN_REG) %>% slice(1) %>% ungroup())$DEN_REG)
plot.list=lapply(regions,function(x) vis.trends(pred.italy,x))
pdf("total_trends.pdf",width=38,height=30)
do.call(grid.arrange,c(plot.list,ncol=4))
dev.off()

relevant=list()
relevant[[1]]=c("Piemonte","Valle d'Aosta", "Liguria")
relevant[[2]]=c("Lombardia")
relevant[[3]]=c("Emilia-Romagna", "Trentino-Alto Adige","Veneto","Friuli Venezia Giulia")
relevant[[4]]=c("Abruzzo","Lazio","Marche","Molise","Toscana","Umbria")
relevant[[5]]=c("Basilicata","Calabria","Campania","Puglia","Sardegna","Sicilia")
names(relevant)=c("NordOvest","Lombardia","NordEst","Centro","Sud")

groups=list()
groups[[1]]=c("Piemonte","Valle d'Aosta", "Liguria")
groups[[2]]="Lombardia"
groups[[3]]=c("Emilia-Romagna", "Trentino-Alto Adige","Veneto","Friuli Venezia Giulia")
groups[[4]]=c("Lazio","Marche","Toscana","Umbria")
groups[[5]]=c("Abruzzo","Basilicata","Calabria","Campania","Molise","Puglia","Sardegna","Sicilia")
names(groups)=c("North-western Italy","Lombardia","North-eastern Italy","Central Italy","Southern Italy + Islands")

plot.list=lapply(groups, function(x) vis.trends(pred.italy,x,ylim=c(500,4000)))
pdf(paste0("total_trends_macroarea_",Sex,".pdf"),width=18,height=16)
do.call(grid.arrange,c(plot.list,ncol=2))
dev.off()

grid.arrange(arrangeGrob(plot.list,ncol=2,heights=c(4,1),widths=c(2,1)))

# This will tell me the posterior probability that Lombardia is outside the observed range in week 9 to complement the text
datatemp=males$prediction_reg %>% filter(DEN_REG %in% region) %>% arrange(week) 
  datatemp=datatemp %>% 
    mutate(low=apply(as.matrix(datatemp %>% select(contains("sim"))),1,quantile,q[1]),
           upp=apply(as.matrix(datatemp %>% select(contains("sim"))),1,quantile,q[2]))
  datatemp=datatemp %>% left_join(pred$prediction %>% filter(week==1) %>% group_by(DEN_REG) %>% summarise(pop=sum(Total.male)) %>% ungroup()) %>% 
    select(DEN_REG,pop,everything())
datatemp %>% mutate(prob=apply(as.matrix(datatemp %>% select(contains("sim"))),1,function(x) sum(x/pop*scale > Obs/pop*scale)/1000)) %>% select(prob,everything())
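# (As pasted, this last snippet references objects that are not defined in the
#  script: males, region, q, pred and scale. Our guess, in order to reproduce
#  it, would be region="Lombardia", q=c(.025,.975), scale=100000 (which in any
#  case cancels in the comparison) and males=pred=pred.italy loaded from
#  Output/Males/predItaly.Rdata, with /1000 assuming nsim=1000; please confirm.)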

We get the following warnings, followed by the verbose INLA log:

> # Code for the paper
> # "Estimating weekly excess mortality at sub-national level in Italy during the COVID-19 pandemic"
> # by Marta Blangiardo, Michela Cameletti, Monica Pirani, Gianni Corsetti, Marco Battaglini, Gianluca Baio
> 
> # Last version: 13/08/2020
> 
> library(dplyr)

Attaching package: ‘dplyr’

The following objects are masked from ‘package:stats’:

    filter, lag

The following objects are masked from ‘package:base’:

    intersect, setdiff, setequal, union

> library(INLA)
Loading required package: Matrix
Loading required package: sp
Loading required package: parallel
Loading required package: foreach
This is INLA_20.03.17 built 2020-11-16 11:05:43 UTC.
See www.r-inla.org/contact-us for how to get help.
> 
> # Load extra functions
> source("make.functions.R")
> 
> ############################################################################
> # 1. PREPARE DATA 
> ############################################################################
> ## Load the macro areas data
> load("./Data/MacroRegions.Rdata")
> 
> # Now prepare the data
> Sex = "Females" #other possible choice: Males
> area = "NordOvest" #other possible choices: NordEst, Sud, Centro, Lombardia
> data = make.data(macro.regions,area,Sex=Sex)
Warning messages:
1: Problem with `mutate()` input `IDarea`.
i The `...` argument of `group_keys()` is deprecated as of dplyr 1.0.0.
Please `group_by()` first
This warning is displayed once every 8 hours.
Call `lifecycle::last_warnings()` to see where this warning was generated.
i Input `IDarea` is `group_indices(., ID_Ita)`. 
2: Problem with `mutate()` input `IDarea`.
i The `...` argument of `group_keys()` is deprecated as of dplyr 1.0.0.
Please `group_by()` first
This warning is displayed once every 8 hours.
Call `lifecycle::last_warnings()` to see where this warning was generated.
i Input `IDarea` is `group_indices(., ID_Ita)`. 
3: The `...` argument of `group_keys()` is deprecated as of dplyr 1.0.0.
Please `group_by()` first
This warning is displayed once every 8 hours.
Call `lifecycle::last_warnings()` to see where this warning was generated. 
> graph = paste0("./Graphs/",tolower(area),".graph")
> 
> ############################################################################
> 
> 
> ############################################################################
> # 2. RUN INLA & SAVE THE OUTPUT 
> ############################################################################
> 
> ## Formula with temperature
> formula = morti ~ 1 + 
+   f(ID1,model="bym",graph=graph,scale.model=T,
+     hyper=list(theta1=list(prior="loggamma",param=c(1,0.1)),theta2=list(prior="loggamma",param=c(1,0.1)))) +
+   f(week,model="rw1",replicate=ID_prov,scale.model=TRUE,hyper=list(prec=list(prior="loggamma",param=c(1,0.1)))) +
+   f(Anno,model="iid") +
+   f(IDtemp,model="rw2",scale.model=TRUE,hyper=list(theta=list(prior="loggamma",param=c(1,0.1))))
> 
> 
> # INLA SET UP
> # Under Poisson uses default set up
> control.family=inla.set.control.family.default()
> # Defines the correct variable to offset the rates in the log-linear predictor
> offset = data$E
> 
> m = inla(formula,
+          data=data,
+          E=offset,
+          family="Poisson",
+          control.family=control.family,
+          verbose = TRUE,
+          num.threads = round(parallel::detectCores()*.8),
+          control.compute=list(config = TRUE)
+ )

    hgid: 8b30e851fed2  date: Tue Mar 17 10:38:12 2020 +0300
Report bugs to <help@r-inla.org>
Process file[C:\Users\claud\AppData\Local\Temp\RtmpUJW5LZ\filedd04f2c14d6/Model.ini] threads[13] blas_threads[1]
inla_build...
    number of sections=[11]
    parse section=[0] name=[INLA.libR] type=[LIBR]
    inla_parse_libR...
        section[INLA.libR]
            R_HOME=[C:/PROGRA~1/R/R-40~1.3]
    parse section=[10] name=[INLA.Expert] type=[EXPERT]
    inla_parse_expert...
        section[INLA.Expert]
            disable.gaussian.check=[0]
            cpo.manual=[0]
            jp.file=[(null)]
            jp.model=[(null)]
    parse section=[1] name=[INLA.Model] type=[PROBLEM]
    inla_parse_problem...
        name=[INLA.Model]
        R-INLA tag=[Version_20.03.17]
        Build tag=[Version_20.03.17]
        openmp.strategy=[default]
        pardiso-library installed and working? = [no]
        smtp = [taucs]
        strategy = [default]
    store results in directory=[C:\Users\claud\AppData\Local\Temp\RtmpUJW5LZ\filedd04f2c14d6/results.files]
        output:
            cpo=[0]
            po=[0]
            dic=[0]
            kld=[1]
            mlik=[1]
            q=[0]
            graph=[0]
            gdensity=[0]
            hyperparameters=[1]
            summary=[1]
            return.marginals=[1]
            nquantiles=[3]  [ 0.025 0.5 0.975 ]
            ncdf=[0]  [ ]
    parse section=[3] name=[Predictor] type=[PREDICTOR]
    inla_parse_predictor ...
        section=[Predictor]
        dir=[predictor]
        PRIOR->name=[loggamma]
        hyperid=[53001|Predictor]
        PRIOR->from_theta=[function (x) <<NEWLINE>>exp(x)]
        PRIOR->to_theta = [function (x) <<NEWLINE>>log(x)]
        PRIOR->PARAMETERS=[1, 1e-005]
        initialise log_precision[12]
        fixed=[1]
        user.scale=[1]
        n=[139400]
        m=[0]
        ndata=[139400]
        compute=[0]
        read offsets from file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0f1c7e26]
        read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0f1c7e26]
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0f1c7e26] 0/139400  (idx,y) = (0, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0f1c7e26] 1/139400  (idx,y) = (1, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0f1c7e26] 2/139400  (idx,y) = (2, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0f1c7e26] 3/139400  (idx,y) = (3, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0f1c7e26] 4/139400  (idx,y) = (4, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0f1c7e26] 5/139400  (idx,y) = (5, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0f1c7e26] 6/139400  (idx,y) = (6, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0f1c7e26] 7/139400  (idx,y) = (7, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0f1c7e26] 8/139400  (idx,y) = (8, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0f1c7e26] 9/139400  (idx,y) = (9, 0)
        Aext=[(null)]
        AextPrecision=[1e+008]
        output:
            summary=[1]
            return.marginals=[1]
            nquantiles=[3]  [ 0.025 0.5 0.975 ]
            ncdf=[0]  [ ]
    parse section=[2] name=[INLA.Data1] type=[DATA]
    inla_parse_data [section 1]...
        tag=[INLA.Data1]
        family=[POISSON]
        likelihood=[POISSON]
        file->name=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd038056756]
        file->name=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0363b6e62]
        file->name=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd04dc63155]
        read n=[418200] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd038056756]
        mdata.nattributes = 0
            0/139400  (idx,a,y,d) = (0, 0.61143, 1, 1)
            1/139400  (idx,a,y,d) = (1, 0.61143, 0, 1)
            2/139400  (idx,a,y,d) = (2, 0.61143, 0, 1)
            3/139400  (idx,a,y,d) = (3, 0.61143, 0, 1)
            4/139400  (idx,a,y,d) = (4, 0.61143, 2, 1)
            5/139400  (idx,a,y,d) = (5, 0.61143, 0, 1)
            6/139400  (idx,a,y,d) = (6, 0.61143, 0, 1)
            7/139400  (idx,a,y,d) = (7, 0.61143, 0, 1)
            8/139400  (idx,a,y,d) = (8, 0.61143, 2, 1)
            9/139400  (idx,a,y,d) = (9, 0.61143, 0, 1)
        likelihood.variant=[0]
        Link model   [LOG]
        Link order   [-1]
        Link variant [-1]
        Link ntheta  [0]
        mix.use[0]
    parse section=[5] name=[ID1] type=[FFIELD]
    inla_parse_ffield...
        section=[ID1]
        dir=[random.effect00000001]
        model=[bym]
        PRIOR0->name=[loggamma]
        hyperid=[10001|ID1]
        PRIOR0->from_theta=[function (x) <<NEWLINE>>exp(x)]
        PRIOR0->to_theta = [function (x) <<NEWLINE>>log(x)]
        PRIOR0->PARAMETERS0=[1, 0.1]
        PRIOR1->name=[loggamma]
        hyperid=[10002|ID1]
        PRIOR1->from_theta=[function (x) <<NEWLINE>>exp(x)]
        PRIOR1->to_theta = [function (x) <<NEWLINE>>log(x)]
        PRIOR1->PARAMETERS1=[1, 0.1]
        correct=[-1]
        constr=[0]
        diagonal=[1.01511e-005]
        id.names=<not present>
        compute=[1]
        nrep=[1]
        ngroup=[1]
        read covariates from file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd059992fae]
        read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd059992fae]
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd059992fae] 0/139400  (idx,y) = (0, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd059992fae] 1/139400  (idx,y) = (1, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd059992fae] 2/139400  (idx,y) = (2, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd059992fae] 3/139400  (idx,y) = (3, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd059992fae] 4/139400  (idx,y) = (4, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd059992fae] 5/139400  (idx,y) = (5, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd059992fae] 6/139400  (idx,y) = (6, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd059992fae] 7/139400  (idx,y) = (7, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd059992fae] 8/139400  (idx,y) = (8, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd059992fae] 9/139400  (idx,y) = (9, 0)
        read graph from file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd038f13d75]
        file for locations=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd01a7937]
            nlocations=[2050]
            locations[0]=[1]
            locations[1]=[2]
            locations[2]=[3]
            locations[3]=[4]
            locations[4]=[5]
            locations[5]=[6]
            locations[6]=[7]
            locations[7]=[8]
            locations[8]=[9]
            locations[9]=[10]
        initialise log_precision (iid component)[4]
        fixed=[0]
        initialise log_precision (spatial component)[4]
        fixed=[0]
        adjust.for.con.comp[1]
        scale.model[1]
        connected component[0] size[2050] scale[0.522871]
        scale.model: prec_scale[0.522871]
        read extra constraint from file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd052095832]
        Constraint[0]
            A[2050] = 1.000000
            A[2051] = 1.000000
            A[2052] = 1.000000
            A[2053] = 1.000000
            A[2054] = 1.000000
            A[2055] = 1.000000
            A[2056] = 1.000000
            A[2057] = 1.000000
            A[2058] = 1.000000
            A[2059] = 1.000000
            A[2060] = 1.000000
            e[0] = 0.000000
        rank-deficiency is *defined* [1]
        output:
            summary=[1]
            return.marginals=[1]
            nquantiles=[3]  [ 0.025 0.5 0.975 ]
            ncdf=[0]  [ ]
    parse section=[6] name=[week] type=[FFIELD]
    inla_parse_ffield...
        section=[week]
        dir=[random.effect00000002]
        model=[rw1]
        PRIOR->name=[loggamma]
        hyperid=[4001|week]
        PRIOR->from_theta=[function (x) <<NEWLINE>>exp(x)]
        PRIOR->to_theta = [function (x) <<NEWLINE>>log(x)]
        PRIOR->PARAMETERS=[1, 0.1]
        correct=[-1]
        constr=[1]
        diagonal=[1.01511e-005]
        id.names=<not present>
        compute=[1]
        nrep=[19]
        ngroup=[1]
        read covariates from file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd03b3f5594]
        read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd03b3f5594]
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd03b3f5594] 0/139400  (idx,y) = (0, 255)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd03b3f5594] 1/139400  (idx,y) = (1, 256)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd03b3f5594] 2/139400  (idx,y) = (2, 257)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd03b3f5594] 3/139400  (idx,y) = (3, 258)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd03b3f5594] 4/139400  (idx,y) = (4, 259)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd03b3f5594] 5/139400  (idx,y) = (5, 260)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd03b3f5594] 6/139400  (idx,y) = (6, 261)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd03b3f5594] 7/139400  (idx,y) = (7, 262)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd03b3f5594] 8/139400  (idx,y) = (8, 263)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd03b3f5594] 9/139400  (idx,y) = (9, 264)
        file for locations=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd02a54708]
            nlocations=[17]
            locations[0]=[1]
            locations[1]=[2]
            locations[2]=[3]
            locations[3]=[4]
            locations[4]=[5]
            locations[5]=[6]
            locations[6]=[7]
            locations[7]=[8]
            locations[8]=[9]
            locations[9]=[10]
        cyclic=[0]
        initialise log_precision[4]
        fixed=[0]
        scale.model[1]
        scale.model: prec_scale[2.41758]
        computed/guessed rank-deficiency = [1]
        output:
            summary=[1]
            return.marginals=[1]
            nquantiles=[3]  [ 0.025 0.5 0.975 ]
            ncdf=[0]  [ ]
    parse section=[7] name=[Anno] type=[FFIELD]
    inla_parse_ffield...
        section=[Anno]
        dir=[random.effect00000003]
        model=[iid]
        PRIOR->name=[loggamma]
        hyperid=[1001|Anno]
        PRIOR->from_theta=[function (x) <<NEWLINE>>exp(x)]
        PRIOR->to_theta = [function (x) <<NEWLINE>>log(x)]
        PRIOR->PARAMETERS=[1, 5e-005]
        correct=[-1]
        constr=[0]
        diagonal=[0]
        id.names=<not present>
        compute=[1]
        nrep=[1]
        ngroup=[1]
        read covariates from file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd04c213576]
        read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd04c213576]
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd04c213576] 0/139400  (idx,y) = (0, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd04c213576] 1/139400  (idx,y) = (1, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd04c213576] 2/139400  (idx,y) = (2, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd04c213576] 3/139400  (idx,y) = (3, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd04c213576] 4/139400  (idx,y) = (4, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd04c213576] 5/139400  (idx,y) = (5, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd04c213576] 6/139400  (idx,y) = (6, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd04c213576] 7/139400  (idx,y) = (7, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd04c213576] 8/139400  (idx,y) = (8, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd04c213576] 9/139400  (idx,y) = (9, 0)
        file for locations=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd06c4413fa]
            nlocations=[4]
            locations[0]=[1]
            locations[1]=[2]
            locations[2]=[3]
            locations[3]=[4]
        cyclic=[0]
        initialise log_precision[4]
        fixed=[0]
        computed/guessed rank-deficiency = [0]
        output:
            summary=[1]
            return.marginals=[1]
            nquantiles=[3]  [ 0.025 0.5 0.975 ]
            ncdf=[0]  [ ]
    parse section=[8] name=[IDtemp] type=[FFIELD]
    inla_parse_ffield...
        section=[IDtemp]
        dir=[random.effect00000004]
        model=[rw2]
        PRIOR->name=[loggamma]
        hyperid=[5001|IDtemp]
        PRIOR->from_theta=[function (x) <<NEWLINE>>exp(x)]
        PRIOR->to_theta = [function (x) <<NEWLINE>>log(x)]
        PRIOR->PARAMETERS=[1, 0.1]
        correct=[-1]
        constr=[1]
        diagonal=[1.01511e-005]
        id.names=<not present>
        compute=[1]
        nrep=[1]
        ngroup=[1]
        read covariates from file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0258f583]
        read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0258f583]
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0258f583] 0/139400  (idx,y) = (0, 17)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0258f583] 1/139400  (idx,y) = (1, 28)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0258f583] 2/139400  (idx,y) = (2, 10)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0258f583] 3/139400  (idx,y) = (3, 28)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0258f583] 4/139400  (idx,y) = (4, 49)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0258f583] 5/139400  (idx,y) = (5, 31)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0258f583] 6/139400  (idx,y) = (6, 27)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0258f583] 7/139400  (idx,y) = (7, 43)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0258f583] 8/139400  (idx,y) = (8, 40)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0258f583] 9/139400  (idx,y) = (9, 22)
        file for locations=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd02df0489b]
            nlocations=[100]
            locations[0]=[1]
            locations[1]=[2]
            locations[2]=[3]
            locations[3]=[4]
            locations[4]=[5]
            locations[5]=[6]
            locations[6]=[7]
            locations[7]=[8]
            locations[8]=[9]
            locations[9]=[10]
        cyclic=[0]
        initialise log_precision[4]
        fixed=[0]
        scale.model[1]
        scale.model: prec_scale[1678.49]
        computed/guessed rank-deficiency = [2]
        output:
            summary=[1]
            return.marginals=[1]
            nquantiles=[3]  [ 0.025 0.5 0.975 ]
            ncdf=[0]  [ ]
    section=[4] name=[(Intercept)] type=[LINEAR]
    inla_parse_linear...
        section[(Intercept)]
        dir=[fixed.effect00000001]
        file for covariates=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0b31c67]
        read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0b31c67]
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0b31c67] 0/139400  (idx,y) = (0, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0b31c67] 1/139400  (idx,y) = (1, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0b31c67] 2/139400  (idx,y) = (2, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0b31c67] 3/139400  (idx,y) = (3, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0b31c67] 4/139400  (idx,y) = (4, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0b31c67] 5/139400  (idx,y) = (5, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0b31c67] 6/139400  (idx,y) = (6, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0b31c67] 7/139400  (idx,y) = (7, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0b31c67] 8/139400  (idx,y) = (8, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpUJW5LZ/filedd04f2c14d6/data.files/filedd0b31c67] 9/139400  (idx,y) = (9, 1)
        prior mean=[0]
        prior precision=[0]
        compute=[1]
        output:
            summary=[1]
            return.marginals=[1]
            nquantiles=[3]  [ 0.025 0.5 0.975 ]
            ncdf=[0]  [ ]
    Index table: number of entries[6], total length[143928]
        tag                            start-index     length
        Predictor                               0     139400
        ID1                                139400       4100
        week                               143500        323
        Anno                               143823          4
        IDtemp                             143827        100
        (Intercept)                        143927          1
    parse section=[9] name=[INLA.Parameters] type=[INLA]
    inla_parse_INLA...
        section[INLA.Parameters]
            lincomb.derived.only = [Yes]
            lincomb.derived.correlation.matrix = [No]
        global_node.factor = 2.000
        global_node.degree = 2147483647
        reordering = -1
Contents of ai_param 00000000066D6250
    Optimiser: DEFAULT METHOD
        Option for GSL-BFGS2: tol  = 0.1
        Option for GSL-BFGS2: step_size = 1
        Option for GSL-BFGS2: epsx = 0.005
        Option for GSL-BFGS2: epsf = 0.000353553
        Option for GSL-BFGS2: epsg = 0.005
        Restart: 0
        Mode known: No
    Gaussian approximation:
        tolerance_func = 0.0005
        tolerance_step = 0.0005
        optpar_fp = 0
        optpar_nr_step_factor = -0.1
    Gaussian data: No
    Strategy:   Use a mean-skew corrected Gaussian by fitting a Skew-Normal
    Fast mode:  On
    Use linear approximation to log(|Q +c|)? Yes
        Method:  Compute the derivative exact
    Parameters for improved approximations
        Number of points evaluate:   9
        Step length to compute derivatives numerically:  0.000100002
        Stencil to compute derivatives numerically:  5
        Cutoff value to construct local neigborhood:     0.0001
    Log calculations:    On
    Log calculated marginal for the hyperparameters:     On
    Integration strategy:    Automatic (GRID for dim(theta)=1 and 2 and otherwise CCD)
        f0 (CCD only):   1.100000
        dz (GRID only):  0.750000
        Adjust weights (GRID only):  On
        Difference in log-density limit (GRID only):     6.000000
        Skip configurations with (presumed) small density (GRID only):   On
    Gradient is computed using Central difference with step-length 0.010000
    Hessian is computed using Central difference with step-length 0.100000
    Hessian matrix is forced to be a diagonal matrix? [No]
    Compute effective number of parameters? [Yes]
    Perform a Monte Carlo error-test? [No]
    Interpolator [Auto]
    CPO required diff in log-density [3]
    Stupid search mode:
        Status     [On]
        Max iter   [1000]
        Factor     [1.05]
    Numerical integration of hyperparameters:
        Maximum number of function evaluations [100000]
        Relative error ....................... [1e-005]
        Absolute error ....................... [1e-006]
    To stabilise the numerical optimisation:
        Minimum value of the -Hessian [-1.#INF]
        Strategy for the linear term [Keep]
    CPO manual calculation[No]
    Laplace-correction is Disabled.

inla_build: check for unused entries in[C:\Users\claud\AppData\Local\Temp\RtmpUJW5LZ\filedd04f2c14d6/Model.ini]
inla_INLA...
    Strategy = [DEFAULT]
    Sparse-matrix library... = [taucs]
    OpenMP strategy......... = [huge]
    Density-strategy........ = [Low]
    Size of graph........... = [143928]
    Number of constraints... = [21]
    Found optimal reordering=[amdc] nnz(L)=[1311252] and use_global_nodes(user)=[no]
    List of hyperparameters: 
        theta[0] = [Log precision for ID1 (idd component)]
        theta[1] = [Log precision for ID1 (spatial component)]
        theta[2] = [Log precision for week]
        theta[3] = [Log precision for Anno]
        theta[4] = [Log precision for IDtemp]
Optimise using DEFAULT METHOD
max.logdens= -96250.7507 fn= 1 theta= 4.0000 4.0000 4.0000 3.9900 4.0000  range=[-0.57 0.87]
max.logdens= -96250.3340 fn= 2 theta= 4.0000 4.0000 4.0100 4.0000 4.0000  range=[-0.57 0.87]
max.logdens= -96250.2972 fn= 4 theta= 4.0100 4.0000 4.0000 4.0000 4.0000  range=[-0.57 0.87]
max.logdens= -96219.2278 fn= 11 theta= 4.7114 4.2403 4.6589 4.0302 3.9658  range=[-0.55 0.76]
max.logdens= -96219.0720 fn= 12 theta= 4.7114 4.2303 4.6589 4.0302 3.9658  range=[-0.55 0.76]
max.logdens= -96218.9925 fn= 14 theta= 4.7114 4.2403 4.6689 4.0302 3.9658  range=[-0.55 0.76]
Iter=1 |grad|=17.4 |x-x.old|=0.447 |f-f.old|=31.5 
max.logdens= -96215.9208 fn= 23 theta= 4.6077 4.0979 4.9817 4.0792 3.9266  range=[-0.55 0.79]
max.logdens= -96215.8882 fn= 24 theta= 4.6077 4.0979 4.9917 4.0792 3.9266  range=[-0.55 0.79]
max.logdens= -96215.8802 fn= 25 theta= 4.6177 4.0979 4.9817 4.0792 3.9266  range=[-0.55 0.79]
max.logdens= -96215.8789 fn= 30 theta= 4.6077 4.0979 4.9817 4.0892 3.9266  range=[-0.55 0.79]
Iter=2 |grad|=4.23 |x-x.old|=0.167 |f-f.old|=3.31 
max.logdens= -96215.6120 fn= 35 theta= 4.6854 4.1372 5.0459 4.1556 3.8729  range=[-0.55 0.77]
max.logdens= -96215.5991 fn= 36 theta= 4.6954 4.1372 5.0459 4.1556 3.8729  range=[-0.55 0.77]
max.logdens= -96215.5594 fn= 38 theta= 4.6854 4.1272 5.0459 4.1556 3.8729  range=[-0.55 0.78]
max.logdens= -96215.5556 fn= 44 theta= 4.6854 4.1372 5.0459 4.1656 3.8729  range=[-0.55 0.77]
Iter=3 |grad|=3.78 |x-x.old|=0.064 |f-f.old|=0.309 
max.logdens= -96215.0678 fn= 46 theta= 4.6841 4.1003 5.0745 4.3431 3.7536  range=[-0.55 0.78]
max.logdens= -96215.0658 fn= 48 theta= 4.6841 4.0903 5.0745 4.3431 3.7536  range=[-0.55 0.78]
max.logdens= -96215.0613 fn= 49 theta= 4.6841 4.1003 5.0745 4.3431 3.7436  range=[-0.55 0.78]
max.logdens= -96215.0510 fn= 54 theta= 4.6841 4.1003 5.0645 4.3431 3.7536  range=[-0.55 0.78]
max.logdens= -96215.0462 fn= 56 theta= 4.6841 4.1003 5.0745 4.3531 3.7536  range=[-0.55 0.78]
max.logdens= -96214.7201 fn= 57 theta= 4.6827 4.0607 5.1051 4.5438 3.6259  range=[-0.55 0.78]
max.logdens= -96214.7195 fn= 61 theta= 4.6827 4.0607 5.1051 4.5438 3.6159  range=[-0.55 0.78]
max.logdens= -96214.7016 fn= 64 theta= 4.6827 4.0707 5.1051 4.5438 3.6259  range=[-0.55 0.78]
max.logdens= -96214.6982 fn= 65 theta= 4.6827 4.0607 5.1051 4.5538 3.6259  range=[-0.55 0.78]
max.logdens= -96214.6894 fn= 67 theta= 4.6827 4.0607 5.0951 4.5438 3.6259  range=[-0.55 0.78]
max.logdens= -96214.5806 fn= 69 theta= 4.6810 4.0110 5.1435 4.7952 3.4659  range=[-0.55 0.79]
max.logdens= -96214.5679 fn= 70 theta= 4.6910 4.0110 5.1435 4.7952 3.4659  range=[-0.55 0.79]
max.logdens= -96214.5625 fn= 72 theta= 4.6810 4.0110 5.1435 4.7952 3.4759  range=[-0.55 0.79]
max.logdens= -96214.5277 fn= 76 theta= 4.6810 4.0210 5.1435 4.7952 3.4659  range=[-0.55 0.79]
max.logdens= -96214.5220 fn= 77 theta= 4.6810 4.0110 5.1335 4.7952 3.4659  range=[-0.55 0.79]
Iter=4 |grad|=6.94 |x-x.old|=0.346 |f-f.old|=1.03 
max.logdens= -96213.2111 fn= 80 theta= 4.6921 4.0598 5.0180 5.4982 3.1000  range=[-0.55 0.78]
max.logdens= -96213.1908 fn= 84 theta= 4.6921 4.0598 5.0180 5.4982 3.1100  range=[-0.55 0.78]
max.logdens= -96213.1216 fn= 92 theta= 4.6954 4.0742 4.9809 5.7061 2.9918  range=[-0.54 0.78]
max.logdens= -96213.1085 fn= 94 theta= 4.6954 4.0742 4.9909 5.7061 2.9918  range=[-0.54 0.78]
max.logdens= -96213.0983 fn= 96 theta= 4.6954 4.0742 4.9809 5.7061 3.0018  range=[-0.54 0.78]
Iter=5 |grad|=3.63 |x-x.old|=0.466 |f-f.old|=1.46 
max.logdens= -96212.6772 fn= 104 theta= 4.6090 4.1596 5.0397 6.1392 2.9764  range=[-0.54 0.77]
max.logdens= -96212.6529 fn= 105 theta= 4.6090 4.1596 5.0397 6.1392 2.9864  range=[-0.54 0.77]
Iter=6 |grad|=3.27 |x-x.old|=0.203 |f-f.old|=0.444 
max.logdens= -96212.0271 fn= 115 theta= 4.6394 4.1127 5.0223 6.5846 3.1337  range=[-0.54 0.78]
max.logdens= -96212.0082 fn= 116 theta= 4.6394 4.1127 5.0223 6.5846 3.1437  range=[-0.54 0.78]
max.logdens= -96212.0069 fn= 133 theta= 4.6699 4.0758 5.0050 7.0300 3.2910  range=[-0.54 0.78]
max.logdens= -96211.9308 fn= 137 theta= 4.6556 4.0878 5.0131 6.8214 3.2173  range=[-0.54 0.78]
max.logdens= -96211.9275 fn= 140 theta= 4.6556 4.0878 5.0131 6.8114 3.2173  range=[-0.54 0.78]
max.logdens= -96211.9172 fn= 142 theta= 4.6556 4.0878 5.0131 6.8214 3.2273  range=[-0.54 0.78]
Iter=7 |grad|=1.97 |x-x.old|=0.326 |f-f.old|=0.746 
max.logdens= -96211.7961 fn= 149 theta= 4.6767 4.1390 5.0375 6.8401 3.3465  range=[-0.54 0.77]
max.logdens= -96211.7772 fn= 151 theta= 4.6767 4.1390 5.0375 6.8401 3.3565  range=[-0.54 0.77]
max.logdens= -96211.7699 fn= 155 theta= 4.6767 4.1290 5.0375 6.8401 3.3465  range=[-0.54 0.77]
Iter=8 |grad|=2.66 |x-x.old|=0.0643 |f-f.old|=0.135 
max.logdens= -96211.6479 fn= 160 theta= 4.6400 4.1280 5.0413 6.8377 3.5261  range=[-0.55 0.78]
max.logdens= -96211.6469 fn= 161 theta= 4.6500 4.1280 5.0413 6.8377 3.5261  range=[-0.54 0.78]
max.logdens= -96211.6408 fn= 163 theta= 4.6400 4.1180 5.0413 6.8377 3.5261  range=[-0.55 0.78]
max.logdens= -96211.6406 fn= 181 theta= 4.6399 4.1180 5.0413 6.8377 3.5265  range=[-0.55 0.78]
max.logdens= -96211.6405 fn= 192 theta= 4.6399 4.1180 5.0413 6.8377 3.5267  range=[-0.55 0.78]
max.logdens= -96211.6403 fn= 218 theta= 4.6399 4.1180 5.0413 6.8377 3.5267  range=[-0.55 0.78]
Iter=9 |grad|=1.23 |x-x.old|=0.0824 |f-f.old|=0.149 Reached numerical limit!
Optim: Number of function evaluations = 224
Compute the Hessian using central differences and step_size[0.1]. Matrix-type [dense]
max.logdens= -96211.6003 fn= 225 theta= 4.6399 4.1280 5.0413 6.7377 3.5267  range=[-0.55 0.78]
Mode not sufficient accurate; switch to a stupid local search strategy.
max.logdens= -96211.5847 fn= 236 theta= 4.6399 4.1280 5.0413 6.7377 3.6317  range=[-0.55 0.78]
max.logdens= -96211.5691 fn= 249 theta= 4.6399 4.1280 5.0413 6.6275 3.6317  range=[-0.55 0.78]

max.logdens= -96211.5690 fn= 309 theta= 4.6399 4.1280 5.0413 6.6275 3.6317  range=[-0.55 0.78]
    47.923824    15.991226    -0.143604    -0.050339    -0.088881
    15.991226    45.140827    -0.072920     0.007869    -0.006437
    -0.143604    -0.072920    44.287863    -0.014403    -0.067313
    -0.050339     0.007869    -0.014403     2.771321     0.007341
    -0.088881    -0.006437    -0.067313     0.007341     4.990696
Eigenvectors of the Hessian
    0.737101    0.675764    0.004130    -0.002371   0.001357
    0.675728    -0.737107   0.008030    0.000781    -0.000698
    -0.008473   0.003123    0.999958    -0.001721   0.000345
    -0.000530   -0.001439   -0.000351   -0.003241   0.999994
    -0.001203   -0.002179   -0.001724   -0.999990   -0.003245
Eigenvalues of the Hessian
    62.585394
    30.480715
    44.286805
    4.990398
    2.771218
StDev/Correlation matrix (scaled inverse Hessian)
     0.153835    -0.343819     0.002756     0.004901     0.005966
                  0.158502     0.000585    -0.002345    -0.001640
                               0.150267     0.001303     0.004542
                                            0.600708    -0.001939
                                                         0.447644
Compute corrected stdev for theta[0]: negative 0.961023  positive 1.053797
Compute corrected stdev for theta[1]: negative 0.929659  positive 1.118742
Compute corrected stdev for theta[2]: negative 1.121293  positive 0.923264
Compute corrected stdev for theta[3]: negative 0.908362  positive 1.169934
Compute corrected stdev for theta[4]: negative 1.279785  positive 0.903255
    config  0/27=[ -1.057 1.231 1.016 -0.999 0.994] log(rel.dens)=-2.948, [8] accept, compute, 1613.60s
    config  1/27=[ -0.000 -0.000 0.000 0.000 2.222] log(rel.dens)=-3.136, [4] accept, compute, 1620.98s
    config  2/27=[ -1.057 -1.023 -1.233 -0.999 0.994] log(rel.dens)=-2.943, [5] accept, compute, 1619.70s
    config  3/27=[ -0.000 -0.000 -0.000 2.878 -0.000] log(rel.dens)=-2.897, [3] accept, compute, 1621.29s
    config  4/27=[ -1.057 -1.023 1.016 -0.999 -1.408] log(rel.dens)=-3.264, [6] accept, compute, 1623.52s
    config  5/27=[ 0.000 0.000 0.000 0.000 0.000] log(rel.dens)=-0.008, [0] accept, compute, 1625.32s
    config  6/27=[ 0.000 2.752 0.000 0.000 -0.000] log(rel.dens)=-3.202, [1] accept, compute, 1625.23s
    config  7/27=[ -1.057 1.231 -1.233 -0.999 -1.408] log(rel.dens)=-2.962, [7] accept, compute, 1625.54s
    config  8/27=[ 0.000 -0.000 2.271 0.000 -0.000] log(rel.dens)=-2.975, [2] accept, compute, 1625.79s
    config  9/27=[ 1.159 -1.023 1.016 -0.999 0.994] log(rel.dens)=-2.945, [10] accept, compute, 1627.00s
    config 10/27=[ 1.159 -1.023 -1.233 -0.999 -1.408] log(rel.dens)=-2.964, [9] accept, compute, 1631.35s
    config 11/27=[ 1.159 1.231 1.016 -0.999 -1.408] log(rel.dens)=-2.901, [12] accept, compute, 1590.14s
    config 12/27=[ 1.159 1.231 -1.233 -0.999 0.994] log(rel.dens)=-2.601, [11] accept, compute, 1606.86s
    config 13/27=[ -1.057 1.231 1.016 1.287 -1.408] log(rel.dens)=-3.150, [8] accept, compute, 1692.99s
    config 14/27=[ -0.000 -0.000 0.000 -0.000 -3.148] log(rel.dens)=-2.549, [4] accept, compute, 1694.83s
    config 15/27=[ 0.000 -0.000 -2.758 -0.000 0.000] log(rel.dens)=-3.161, [2] accept, compute, 1692.19s
    config 16/27=[ -1.057 -1.023 -1.233 1.287 -1.408] log(rel.dens)=-3.200, [5] accept, compute, 1697.64s
    config 17/27=[ -1.057 1.231 -1.233 1.287 0.994] log(rel.dens)=-2.880, [7] accept, compute, 1694.37s
    config 18/27=[ -0.000 0.000 -0.000 -2.234 0.000] log(rel.dens)=-3.117, [3] accept, compute, 1698.80s
    config 19/27=[ -0.000 -2.287 0.000 -0.000 -0.000] log(rel.dens)=-2.989, [1] accept, compute, 1697.55s
    config 20/27=[ 1.159 -1.023 1.016 1.287 -1.408] log(rel.dens)=-3.167, [10] accept, compute, 1696.72s
    config 21/27=[ -1.057 -1.023 1.016 1.287 0.994] log(rel.dens)=-3.175, [6] accept, compute, 1699.69s
    config 22/27=[ 2.592 -0.000 0.000 -0.000 0.000] log(rel.dens)=-3.032, [0] accept, compute, 1699.21s
    config 23/27=[ 1.159 -1.023 -1.233 1.287 0.994] log(rel.dens)=-2.875, [9] accept, compute, 1694.06s
    config 24/27=[ 1.159 1.231 1.016 1.287 0.994] log(rel.dens)=-2.811, [12] accept, compute, 1677.88s
    config 25/27=[ 1.159 1.231 -1.233 1.287 -1.408] log(rel.dens)=-2.789, [11] accept, compute, 1659.33s
    config 26/27=[ -2.364 -0.000 -0.000 0.000 0.000] log(rel.dens)=-3.052, [0] accept, compute, 304.53s
Combine the densities with relative weights:
    config  0/27=[ 0.000 0.000 0.000 0.000 0.000] weight = 1.000  neff = 664.77
    config  1/27=[ 2.592 -0.000 0.000 -0.000 0.000] weight = 0.183  neff = 598.52
    config  2/27=[ -2.364 -0.000 -0.000 0.000 0.000] weight = 0.180  neff = 730.00
    config  3/27=[ 0.000 2.752 0.000 0.000 -0.000] weight = 0.155  neff = 667.58
    config  4/27=[ -0.000 -2.287 0.000 -0.000 -0.000] weight = 0.191  neff = 686.72
    config  5/27=[ 0.000 -0.000 2.271 0.000 -0.000] weight = 0.194  neff = 648.87
    config  6/27=[ 0.000 -0.000 -2.758 -0.000 0.000] weight = 0.161  neff = 685.76
    config  7/27=[ -0.000 -0.000 -0.000 2.878 -0.000] weight = 0.210  neff = 670.11
    config  8/27=[ -0.000 0.000 -0.000 -2.234 0.000] weight = 0.168  neff = 661.63
    config  9/27=[ -0.000 -0.000 0.000 0.000 2.222] weight = 0.165  neff = 664.32
    config 10/27=[ -0.000 -0.000 0.000 -0.000 -3.148] weight = 0.297  neff = 665.13
    config 11/27=[ -1.057 -1.023 -1.233 -0.999 0.994] weight = 0.200  neff = 708.46
    config 12/27=[ -1.057 -1.023 -1.233 1.287 -1.408] weight = 0.155  neff = 712.61
    config 13/27=[ -1.057 -1.023 1.016 -0.999 -1.408] weight = 0.145  neff = 692.43
    config 14/27=[ -1.057 -1.023 1.016 1.287 0.994] weight = 0.159  neff = 695.77
    config 15/27=[ -1.057 1.231 -1.233 -0.999 -1.408] weight = 0.197  neff = 698.08
    config 16/27=[ -1.057 1.231 -1.233 1.287 0.994] weight = 0.214  neff = 701.40
    config 17/27=[ -1.057 1.231 1.016 -0.999 0.994] weight = 0.199  neff = 681.31
    config 18/27=[ -1.057 1.231 1.016 1.287 -1.408] weight = 0.163  neff = 685.21
    config 19/27=[ 1.159 -1.023 -1.233 -0.999 -1.408] weight = 0.196  neff = 648.93
    config 20/27=[ 1.159 -1.023 -1.233 1.287 0.994] weight = 0.214  neff = 652.26
    config 21/27=[ 1.159 -1.023 1.016 -0.999 0.994] weight = 0.200  neff = 632.13
    config 22/27=[ 1.159 -1.023 1.016 1.287 -1.408] weight = 0.160  neff = 636.22
    config 23/27=[ 1.159 1.231 -1.233 -0.999 0.994] weight = 0.282  neff = 639.72
    config 24/27=[ 1.159 1.231 -1.233 1.287 -1.408] weight = 0.234  neff = 643.58
    config 25/27=[ 1.159 1.231 1.016 -0.999 -1.408] weight = 0.209  neff = 623.55
    config 26/27=[ 1.159 1.231 1.016 1.287 0.994] weight = 0.229  neff = 626.86
Done.
Expected effective number of parameters: 665.603(28.161),  eqv.#replicates: 209.434
Marginal likelihood: Integration -96214.111984 Gaussian-approx -96213.965819
Compute the marginal for each of the 5 hyperparameters
Interpolation method: Auto
    Compute the marginal for theta[0] to theta[4] using numerical integration...
    Compute the marginal for theta[0] to theta[4] using numerical integration... Done.
Compute the marginal for the hyperparameters... done.
Store results in directory[C:\Users\claud\AppData\Local\Temp\RtmpUJW5LZ\filedd04f2c14d6/results.files]

Wall-clock time used on [C:\Users\claud\AppData\Local\Temp\RtmpUJW5LZ\filedd04f2c14d6/Model.ini]
    Preparations    :   0.211 seconds
    Approx inference: 3882.358 seconds [0.3|0.0|5.3|94.3|0.1]%
    Output          :   0.323 seconds
    ---------------------------------
    Total           : 3882.892 seconds

> 
> file=paste0("Output/",Sex,"/output",area,".Rdata")
> save(m,file=file)
Warning message:
In save(m, file = file) : 'package:stats' may not be available when loading
> 
> ############################################################################
> # 3. GIVEN THE INLA OUTPUTS, 
> # SIMULATE FROM THE POSTERIOR DISTRIBUTION & SAVE THE OUTPUT 
> ############################################################################
> # 
> make.posteriors(area,Sex)
>   
> ############################################################################
> # 4. GIVEN THE INLA OUTPUTS AND SIMULATIONS,
> # COMPUTE THE PREDICTIONS FOR 2020 & SAVE THE OUTPUT 
> ############################################################################
> make.predictions(area,Sex)
Joining, by = c("COD_PROVCOM", "week")
Joining, by = c("COD_PROVCOM", "COMUNE", "COD_PROV", "DEN_UTS", "SIGLA", "DEN_REG", "COD_REG", "week", "Anno", "sex", "ID_Ita", "Vuln2011", "p.africa", "p.america", "p.asia", "p.europa", "age.group", "temperature", "temp_grp", "IDtemp")
There were 50 or more warnings (use warnings() to see the first 50)
>   
> ############################################################################
> # 5. COMBINE THE OUTCOMES 
> ############################################################################
> # Analysis of all output
> # Loads all the results
> require(dplyr)
> 
> pred.italy=list()
> for (i in names(macro.regions)){
+   file=paste0("Output/",Sex,"/predictions",i,".Rdata")
+   load(file)
+   pred.italy$SMR=bind_rows(pred.italy$SMR,res$SMR)
+   pred.italy$prediction=bind_rows(pred.italy$prediction,res$prediction)
+   rm(res)
+ }
Error in readChar(con, 5L, useBytes = TRUE) : cannot open the connection
In addition: Warning message:
In readChar(con, 5L, useBytes = TRUE) :
  cannot open compressed file 'Output/Females/predictionsLombardia.Rdata', probable reason 'No such file or directory'
> 
> ####
> # Removes the records that don't have values in the simulations (comuni that don't match the 2016-2019 data)
> # also makes 'week' a numeric variable (not a factor)
> pred.italy$SMR=pred.italy$SMR %>% filter(!is.na(Obs)) %>% mutate(week=as.numeric(as.character(week)))
> pred.italy$prediction=pred.italy$prediction %>% filter(!is.na(Obs)) %>% mutate(week=as.numeric(as.character(week)))
> 
> # Aggregates by province
> pred.italy$SMR_prov=(pred.italy$SMR %>% group_by(DEN_UTS,week) %>% arrange(COD_PROV,week) %>% slice(1) %>% 
+                        select(COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week) %>% ungroup()) %>% 
+   bind_cols(pred.italy$SMR %>% group_by(DEN_UTS,week) %>% arrange(week) %>% 
+               summarise_at(vars("Obs",starts_with("sim")),funs(mean)) %>% ungroup()) %>% select(-c(week...8,DEN_UTS...7)) %>%
+   rename(week=week...6,DEN_UTS=DEN_UTS...2)
New names:
* DEN_UTS -> DEN_UTS...2
* week -> week...6
* DEN_UTS -> DEN_UTS...7
* week -> week...8
Warning message:
`funs()` is deprecated as of dplyr 0.8.0.
Please use a list of either functions or lambdas: 

  # Simple named list: 
  list(mean = mean, median = median)

  # Auto named with `tibble::lst()`: 
  tibble::lst(mean, median)

  # Using lambdas
  list(~ mean(., trim = .2), ~ median(., na.rm = TRUE))
This warning is displayed once every 8 hours.
Call `lifecycle::last_warnings()` to see where this warning was generated. 
> pred.italy$SMR_prov=pred.italy$SMR_prov %>% 
+   mutate(mean=apply(as.matrix(pred.italy$SMR_prov %>% select(contains("sim"))),1,mean,na.rm=TRUE),
+          median=apply(as.matrix(pred.italy$SMR_prov %>% select(contains("sim"))),1,median,na.rm=TRUE),
+          sd=apply(as.matrix(pred.italy$SMR_prov %>% select(contains("sim"))),1,sd,na.rm=TRUE),
+          low=apply(as.matrix(pred.italy$SMR_prov %>% select(contains("sim"))),1,quantile,.025,na.rm=TRUE),
+          upp=apply(as.matrix(pred.italy$SMR_prov %>% select(contains("sim"))),1,quantile,.975,na.rm=TRUE)) %>% 
+   select(COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week,Obs,mean,sd,low,median,upp,everything())
> 
> pred.italy$prediction_prov=(pred.italy$prediction %>% group_by(DEN_UTS,week) %>% arrange(COD_PROV,week) %>% slice(1) %>% 
+                               select(COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week) %>% ungroup()) %>% 
+   bind_cols(pred.italy$prediction %>% group_by(DEN_UTS,week) %>% arrange(COD_PROV,week) %>% 
+               summarise_at(vars("Obs",starts_with("sim")),funs(sum)) %>% ungroup()) %>% 
+   select(-c(week...8,DEN_UTS...7)) %>% rename(week=week...6,DEN_UTS=DEN_UTS...2)
New names:
* DEN_UTS -> DEN_UTS...2
* week -> week...6
* DEN_UTS -> DEN_UTS...7
* week -> week...8
> pred.italy$prediction_prov=pred.italy$prediction_prov %>% 
+   mutate(mean=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("sim"))),1,mean,na.rm=TRUE),
+          median=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("sim"))),1,median,na.rm=TRUE),
+          sd=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("sim"))),1,sd,na.rm=TRUE),
+          low=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("sim"))),1,quantile,.025,na.rm=TRUE),
+          upp=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("sim"))),1,quantile,.975,na.rm=TRUE)) %>% 
+   select(COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week,Obs,mean,sd,low,median,upp,everything())
> 
> # Aggregates by region
> pred.italy$prediction_reg=(pred.italy$prediction %>% group_by(DEN_REG,week) %>% arrange(COD_PROV,week) %>% slice(1) %>% 
+                              select(COD_REG,DEN_REG,week) %>% ungroup()) %>% 
+   bind_cols(pred.italy$prediction %>% group_by(DEN_REG,week) %>% arrange(COD_PROV,week) %>% 
+               summarise_at(vars("Obs",starts_with("sim")),funs(sum)) %>% ungroup()) %>% 
+   select(-c(week...5,DEN_REG...4)) %>% rename(week=week...3,DEN_REG=DEN_REG...2)
New names:
* DEN_REG -> DEN_REG...2
* week -> week...3
* DEN_REG -> DEN_REG...4
* week -> week...5
> pred.italy$prediction_reg=pred.italy$prediction_reg %>% 
+   mutate(mean=apply(as.matrix(pred.italy$prediction_reg %>% select(contains("sim"))),1,mean,na.rm=TRUE),
+          median=apply(as.matrix(pred.italy$prediction_reg %>% select(contains("sim"))),1,median,na.rm=TRUE),
+          sd=apply(as.matrix(pred.italy$prediction_reg %>% select(contains("sim"))),1,sd,na.rm=TRUE),
+          low=apply(as.matrix(pred.italy$prediction_reg %>% select(contains("sim"))),1,quantile,.025,na.rm=TRUE),
+          upp=apply(as.matrix(pred.italy$prediction_reg %>% select(contains("sim"))),1,quantile,.975,na.rm=TRUE)) %>% 
+   select(DEN_REG,COD_REG,week,Obs,mean,sd,low,median,upp,everything())
> ####
> 
> # Creates excess deaths [= (est-obs)/obs]
> xs=-1*sweep(as.matrix(pred.italy$prediction_prov %>% select(contains("sim"))),
+             1,pred.italy$prediction_prov$Obs,FUN ="-")
> xs=sweep(as.matrix(xs),1,pred.italy$prediction_prov$Obs,FUN ="/")
> colnames(xs)=stringr::str_replace(colnames(xs),"sim","xs")
> pred.italy$prediction_prov=pred.italy$prediction_prov %>% bind_cols(as_tibble(xs)) 
> pred.italy$prediction_prov=pred.italy$prediction_prov %>% mutate(
+   mean.excess=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("xs"))),1,mean,na.rm=T),
+   sd.excess=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("xs"))),1,sd,na.rm=T),
+   low.excess=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("xs"))),1,quantile,.025,na.rm=T),
+   upp.excess=apply(as.matrix(pred.italy$prediction_prov %>% select(contains("xs"))),1,quantile,.975,na.rm=T)
+ ) %>% select(COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week,Obs,mean.excess,sd.excess,low.excess,upp.excess,everything())
> 
> 
> # Creates excess deaths [= (est-obs)/obs]
> xs=-1*sweep(as.matrix(pred.italy$prediction %>% select(contains("sim"))),
+             1,pred.italy$prediction$Obs,FUN ="-")
> xs=sweep(as.matrix(xs),1,pred.italy$prediction$Obs,FUN ="/")
> colnames(xs)=stringr::str_replace(colnames(xs),"sim","xs")
> pred.italy$prediction=pred.italy$prediction %>% bind_cols(as_tibble(xs)) 
> pred.italy$prediction=pred.italy$prediction %>% 
+   mutate(
+     mean.excess=apply(as.matrix(pred.italy$prediction %>% select(contains("xs"))),1,mean,na.rm=T),
+     sd.excess=apply(as.matrix(pred.italy$prediction %>% select(contains("xs"))),1,sd,na.rm=T),
+     low.excess=apply(as.matrix(pred.italy$prediction %>% select(contains("xs"))),1,quantile,.025,na.rm=T),
+     upp.excess=apply(as.matrix(pred.italy$prediction %>% select(contains("xs"))),1,quantile,.975,na.rm=T)
+   ) %>% select(COD_PROV,DEN_UTS,SIGLA,DEN_REG,COD_REG,week,Obs,mean.excess,sd.excess,low.excess,upp.excess,everything())
> 
> 
> # Add population from official data and extrapolation at 2020
> pop.tot=as_tibble(read.table("p20_att.txt",header = TRUE))
Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") :
  cannot open file 'p20_att.txt': No such file or directory
> # Aggregate by age groups
> pop.tot=pop.tot %>% rename(COD_PROVCOM=PRO_COM_ATT) %>% group_by(COD_PROVCOM) %>% mutate(Total.male=sum(n_m),Total.female=sum(n_f)) %>% 
+   slice(1) %>% ungroup() %>% select(COD_PROVCOM,contains("Total"))
Error in eval(lhs, parent, parent) : object 'pop.tot' not found
> # Add population to predictions
> pred.italy$prediction=pred.italy$prediction %>% left_join(pop.tot) %>% select(COD_PROVCOM,COMUNE,Total.male,Total.female,everything()) %>% 
+   mutate(mean.excess=case_when(Obs==0~NA_real_,
+                                TRUE~mean.excess),
+          sd.excess=case_when(Obs==0~NA_real_,
+                              TRUE~sd.excess),
+          low.excess=case_when(Obs==0~NA_real_,
+                               TRUE~low.excess),
+          upp.excess=case_when(Obs==0~NA_real_,
+                               TRUE~upp.excess)
+   )
Error in is.data.frame(y) : object 'pop.tot' not found
> save(pred.italy,file=paste0("Output/",Sex,"/predItaly.Rdata"))
> 
> 
> 
> ############################################################################
> # 6. VISUALISE THE RESULTS
> ############################################################################
> library(gridExtra)

Attaching package: ‘gridExtra’

The following object is masked from ‘package:dplyr’:

    combine

> 
> file=paste0("Output/",Sex,"/predItaly.Rdata")
> load(file)
> 
> # Plots the predicted deaths by comune & week
> 
> # Selects comuni with the highest percentage excess deaths
> comuni2plot=as.character((pred.italy$prediction %>% filter(Total.male+Total.female>=100000) %>% arrange(desc(mean.excess)) %>% 
+                             select(mean.excess,COMUNE,DEN_UTS,DEN_REG) %>% group_by(COMUNE) %>% slice(1) %>% ungroup() %>% 
+                             arrange(desc(mean.excess)))$COMUNE)[1:9]
Error: Problem with `filter()` input `..1`.
x object 'Total.male' not found
i Input `..1` is `Total.male + Total.female >= 1e+05`.
Run `rlang::last_error()` to see where the error occurred.
> comuni2plot=c("Bergamo","Piacenza","Brescia","Parma","Milano","Cremona","Pesaro","Verona","Bologna")
> 
> 
> 
> plot.list=lapply(comuni2plot,function(x) vis.xs.rate(pred.italy,x, sex="Females"))
Loading required package: ggplot2
 Error in FUN(X[[i]], ...) : 
  only defined on a data frame with all numeric variables
> pdf("excess_rate_females.pdf",width=30,height=24)
> do.call(grid.arrange,c(plot.list,ncol=3))
Error in do.call(grid.arrange, c(plot.list, ncol = 3)) : 
  object 'plot.list' not found
> dev.off()
null device 
          1 
> 
> plot.list2=lapply(comuni2plot,function(x) vis.rate(pred.italy,x,sex="Females", multi=TRUE))
 Error: Problem with `mutate()` input `Total`.
x object 'Total.female' not found
i Input `Total` is `Total.female`.
Run `rlang::last_error()` to see where the error occurred.
> pdf("excess_mortality_females.pdf",width=30,height=24)
> do.call(grid.arrange,c(plot.list2,ncol=3))
Error in do.call(grid.arrange, c(plot.list2, ncol = 3)) : 
  object 'plot.list2' not found
> dev.off()
null device 
          1 
> 
> 
> pdf("WeeklyTrendProv_Females.pdf")
> lemon::grid_arrange_shared_legend(
+   make.map(1),make.map(2),make.map(3),make.map(4),make.map(5),make.map(6),
+   make.map(7),make.map(8),make.map(9),
+   make.map(10),make.map(11),make.map(12),
+   make.map(13),nrow=4,ncol=4,position="bottom" 
+ )
Error in make.map(1) : object 'prov.shp' not found
> dev.off()
null device 
          1 
> 
> 
> # Overall trends
> # BUT MAKE TRENDS x10,000
> 
> vis.trends(pred.italy,"Lombardia")
 Error in FUN(X[[i]], ...) : 
  only defined on a data frame with all numeric variables
> regions=as.character((pred.italy$prediction_reg %>% group_by(DEN_REG) %>% slice(1) %>% ungroup())$DEN_REG)
> plot.list=lapply(regions,function(x) vis.trends(pred.italy,x))
Warning messages:
1: Ignoring unknown parameters: face 
2: Ignoring unknown parameters: face 
3: Ignoring unknown parameters: face 
> pdf("total_trends.pdf",width=38,height=30)
> do.call(grid.arrange,c(plot.list,ncol=4))
> dev.off()
null device 
          1 
> 
> relevant=list()
> relevant[[1]]=c("Piemonte","Valle d'Aosta", "Liguria")
> relevant[[2]]=c("Lombardia")
> relevant[[3]]=c("Emilia-Romagna", "Trentino-Alto Adige","Veneto","Friuli Venezia Giulia")
> relevant[[4]]=c("Abruzzo","Lazio","Marche","Molise","Toscana","Umbria")
> relevant[[5]]=c("Basilicata","Calabria","Campania","Puglia","Sardegna","Sicilia")
> names(relevant)=c("NordOvest","Lombardia","NordEst","Centro","Sud")
> 
> groups=list()
> groups[[1]]=c("Piemonte","Valle d'Aosta", "Liguria")
> groups[[2]]="Lombardia"
> groups[[3]]=c("Emilia-Romagna", "Trentino-Alto Adige","Veneto","Friuli Venezia Giulia")
> groups[[4]]=c("Lazio","Marche","Toscana","Umbria")
> groups[[5]]=c("Abruzzo","Basilicata","Calabria","Campania","Molise","Puglia","Sardegna","Sicilia")
> names(groups)=c("North-western Italy","Lombardia","North-eastern Italy","Central Italy","Southern Italy + Islands")
> 
> plot.list=lapply(groups, function(x) vis.trends(pred.italy,x,ylim=c(500,4000)))
Error in FUN(X[[i]], ...) : 
  only defined on a data frame with all numeric variables
In addition: Warning message:
 Error in FUN(X[[i]], ...) : 
  only defined on a data frame with all numeric variables
> pdf(paste0("total_trends_macroarea_",Sex,".pdf"),width=18,height=16)
> do.call(grid.arrange,c(plot.list,ncol=2))
> dev.off()
null device 
          1 
> 
> grid.arrange(arrangeGrob(plot.list,ncol=2,heights=c(4,1),widths=c(2,1)))
Error in gList(list(list(data = list(DEN_REG = c(8L, 8L, 8L, 8L, 8L, 8L,  : 
  only 'grobs' allowed in "gList"
> 
> 
> # This will tell me the posterior probability that Lombardia is outside the observed range in week 9 to complement the text
> datatemp=males$prediction_reg %>% filter(DEN_REG %in% region) %>% arrange(week) 
Error in eval(lhs, parent, parent) : object 'males' not found
>   datatemp=datatemp %>% 
+     mutate(low=apply(as.matrix(datatemp %>% select(contains("sim"))),1,quantile,q[1]),
+            upp=apply(as.matrix(datatemp %>% select(contains("sim"))),1,quantile,q[2]))
Error in eval(lhs, parent, parent) : object 'datatemp' not found
>   datatemp=datatemp %>% left_join(pred$prediction %>% filter(week==1) %>% group_by(DEN_REG) %>% summarise(pop=sum(Total.male)) %>% ungroup()) %>% 
+     select(DEN_REG,pop,everything())
Error in eval(lhs, parent, parent) : object 'datatemp' not found
> datatemp %>% mutate(prob=apply(as.matrix(datatemp %>% select(contains("sim"))),1,function(x) sum(x/pop*scale > Obs/pop*scale)/1000)) %>% select(prob,everything())
Error in eval(lhs, parent, parent) : object 'datatemp' not found

If I look in the ./Output/Females folder, I see 4 files.

  1. Is this correct, or do we have to fix those errors? If so, how?

  2. Our goal is to get age-stratified excess mortality for Piedmont (so we need at least regional resolution). I think these are in predItaly[["predictions_prov"]]: is that right?

  3. Unfortunately, we are not domain experts. Could you provide an explicit interpretation of the output files and their fields?

Thank you very much.

giabaio commented 3 years ago

Thanks for this. I think there are a few things to mention here. 1) The first batch of errors are not really errors but warnings, so I think they don't stop the process --- they simply return a message. This is due to the newer version of dplyr released since we did the analysis (our code has not been modified since last May). So warnings like

Warning messages:
1: Problem with `mutate()` input `IDarea`.
i The `...` argument of `group_keys()` is deprecated as of dplyr 1.0.0.
Please `group_by()` first
This warning is displayed once every 8 hours.
Call `lifecycle::last_warnings()` to see where this warning was generated.
i Input `IDarea` is `group_indices(., ID_Ita)`. 
2: Problem with `mutate()` input `IDarea`.
i The `...` argument of `group_keys()` is deprecated as of dplyr 1.0.0.
Please `group_by()` first
This warning is displayed once every 8 hours.
Call `lifecycle::last_warnings()` to see where this warning was generated.
i Input `IDarea` is `group_indices(., ID_Ita)`. 
3: The `...` argument of `group_keys()` is deprecated as of dplyr 1.0.0.
Please `group_by()` first
This warning is displayed once every 8 hours.
Call `lifecycle::last_warnings()` to see where this warning was generated.

should not stop the script (can you confirm?).
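
For reference, here is a minimal sketch of the dplyr >= 1.0 idiom the warning is asking for (just an illustration, not a change we have made to the repository): group_by() first, then cur_group_id() inside mutate().

# Sketch only (assumes dplyr >= 1.0): build the area index without the
# deprecated data-masking call group_indices(., ID_Ita)
library(dplyr)
data = data %>%
  group_by(ID_Ita) %>%
  mutate(IDarea = cur_group_id()) %>% # replaces group_indices(., ID_Ita)
  ungroup()

The same pattern would apply to the other grouped-index calls in make.functions.R.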

2) Then INLA runs successfully, as far as I can tell from your output, and you do manage to save the resulting output:

> file=paste0("Output/",Sex,"/output",area,".Rdata")
> save(m,file=file)
Warning message:
In save(m, file = file) : 'package:stats' may not be available when loading
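
If you want to be extra safe before moving on to step 3, you can reload the saved object and inspect it. A quick sketch (Sex, area and m are the names used in the script):

# Sanity check (sketch): reload the saved INLA fit and confirm it is
# usable before calling make.posteriors()
load(paste0("Output/",Sex,"/output",area,".Rdata")) # restores 'm'
class(m)   # should be "inla"
summary(m) # prints the fitted model summary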

3) The first real error, I think, happens here:

> pred.italy=list()
> for (i in names(macro.regions)){
+   file=paste0("Output/",Sex,"/predictions",i,".Rdata")
+   load(file)
+   pred.italy$SMR=bind_rows(pred.italy$SMR,res$SMR)
+   pred.italy$prediction=bind_rows(pred.italy$prediction,res$prediction)
+   rm(res)
+ }
Error in readChar(con, 5L, useBytes = TRUE) : cannot open the connection
In addition: Warning message:
In readChar(con, 5L, useBytes = TRUE) :
  cannot open compressed file 'Output/Females/predictionsLombardia.Rdata', probable reason 'No such file or directory'
> 

After you've run the command make.predictions(area,Sex), instead of going on with the script, can you check what the results are? Specifically, if you look in the folder Output/Females, what do you see at this point?

Everything else, I think, depends on this: you're missing one file, which in turn doesn't let you save other outputs, which in turn breaks the rest of the script...

Have you tried running the script bit by bit (following the numbering)? Can you post the list of files you get in Output/Females or, even better, can you run make.predictions(area,Sex) and see what happens? The script is designed so that once a time-consuming step is finished, the necessary output is saved somewhere, so you can stop the process and then restart from that step with the next one...
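
In the meantime, a defensive version of the combining loop in step 5 (a sketch, not the repository's code) would at least report which file is missing instead of dying on the first load():

# Sketch only: skip macro-regions whose predictions file has not been
# produced yet, and say so, rather than failing on the first load()
pred.italy=list()
for (i in names(macro.regions)){
  file=paste0("Output/",Sex,"/predictions",i,".Rdata")
  if (!file.exists(file)) {
    warning("missing ",file,": run make.predictions() for this area first")
    next
  }
  load(file) # restores the object 'res'
  pred.italy$SMR=bind_rows(pred.italy$SMR,res$SMR)
  pred.italy$prediction=bind_rows(pred.italy$prediction,res$prediction)
  rm(res)
}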

ClaudMor commented 3 years ago

Hello,

Thanks for the reply.

  1. I can confirm that those warnings don't stop the script.

  2. So I ran the script above up to make.predictions and got:

> # Code for the paper
> # "Estimating weekly excess mortality at sub-national level in Italy during the COVID-19 pandemic"
> # by Marta Blangiardo, Michela Cameletti, Monica Pirani, Gianni Corsetti, Marco Battaglini, Gianluca Baio
> 
> # Last version: 13/08/2020
> 
> library(dplyr)

Attaching package: ‘dplyr’

The following objects are masked from ‘package:stats’:

    filter, lag

The following objects are masked from ‘package:base’:

    intersect, setdiff, setequal, union

> library(INLA)
Loading required package: Matrix
Loading required package: sp
Loading required package: parallel
Loading required package: foreach
This is INLA_20.03.17 built 2020-11-16 11:05:43 UTC.
See www.r-inla.org/contact-us for how to get help.
> 
> # Load extra functions
> source("make.functions.R")
> 
> ############################################################################
> # 1. PREPARE DATA 
> ############################################################################
> ## Load the macro areas data
> load("./Data/MacroRegions.Rdata")
> 
> # Now prepare the data
> Sex = "Females" #other possible choice: Males
> area = "NordOvest" #other possible choices: NordEst, Sud, Centro, Lombardia
> data = make.data(macro.regions,area,Sex=Sex)
Warning messages:
1: Problem with `mutate()` input `IDarea`.
i The `...` argument of `group_keys()` is deprecated as of dplyr 1.0.0.
Please `group_by()` first
This warning is displayed once every 8 hours.
Call `lifecycle::last_warnings()` to see where this warning was generated.
i Input `IDarea` is `group_indices(., ID_Ita)`. 
2: Problem with `mutate()` input `IDarea`.
i The `...` argument of `group_keys()` is deprecated as of dplyr 1.0.0.
Please `group_by()` first
This warning is displayed once every 8 hours.
Call `lifecycle::last_warnings()` to see where this warning was generated.
i Input `IDarea` is `group_indices(., ID_Ita)`. 
3: The `...` argument of `group_keys()` is deprecated as of dplyr 1.0.0.
Please `group_by()` first
This warning is displayed once every 8 hours.
Call `lifecycle::last_warnings()` to see where this warning was generated. 
> graph = paste0("./Graphs/",tolower(area),".graph")
> 
> ############################################################################
> 
> 
> ############################################################################
> # 2. RUN INLA & SAVE THE OUTPUT 
> ############################################################################
> 
> ## Formula with temperature
> formula = morti ~ 1 + 
+   f(ID1,model="bym",graph=graph,scale.model=T,
+     hyper=list(theta1=list(prior="loggamma",param=c(1,0.1)),theta2=list(prior="loggamma",param=c(1,0.1)))) +
+   f(week,model="rw1",replicate=ID_prov,scale.model=TRUE,hyper=list(prec=list(prior="loggamma",param=c(1,0.1)))) +
+   f(Anno,model="iid") +
+   f(IDtemp,model="rw2",scale.model=TRUE,hyper=list(theta=list(prior="loggamma",param=c(1,0.1))))
> 
> 
> # INLA SET UP
> # Under Poisson uses default set up
> control.family=inla.set.control.family.default()
> # Defines the correct variable to offset the rates in the log-linear predictor
> offset = data$E
> 
> m = inla(formula,
+          data=data,
+          E=offset,
+          family="Poisson",
+          control.family=control.family,
+          verbose = TRUE,
+          num.threads = round(parallel::detectCores()*.8),
+          control.compute=list(config = TRUE)
+ )

    hgid: 8b30e851fed2  date: Tue Mar 17 10:38:12 2020 +0300
Report bugs to <help@r-inla.org>
Process file[C:\Users\claud\AppData\Local\Temp\RtmpEHnKic\file48f03ca36bfa/Model.ini] threads[13] blas_threads[1]
inla_build...
    number of sections=[11]
    parse section=[0] name=[INLA.libR] type=[LIBR]
    inla_parse_libR...
        section[INLA.libR]
            R_HOME=[C:/PROGRA~1/R/R-40~1.3]
    parse section=[10] name=[INLA.Expert] type=[EXPERT]
    inla_parse_expert...
        section[INLA.Expert]
            disable.gaussian.check=[0]
            cpo.manual=[0]
            jp.file=[(null)]
            jp.model=[(null)]
    parse section=[1] name=[INLA.Model] type=[PROBLEM]
    inla_parse_problem...
        name=[INLA.Model]
        R-INLA tag=[Version_20.03.17]
        Build tag=[Version_20.03.17]
        openmp.strategy=[default]
        pardiso-library installed and working? = [no]
        smtp = [taucs]
        strategy = [default]
    store results in directory=[C:\Users\claud\AppData\Local\Temp\RtmpEHnKic\file48f03ca36bfa/results.files]
        output:
            cpo=[0]
            po=[0]
            dic=[0]
            kld=[1]
            mlik=[1]
            q=[0]
            graph=[0]
            gdensity=[0]
            hyperparameters=[1]
            summary=[1]
            return.marginals=[1]
            nquantiles=[3]  [ 0.025 0.5 0.975 ]
            ncdf=[0]  [ ]
    parse section=[3] name=[Predictor] type=[PREDICTOR]
    inla_parse_predictor ...
        section=[Predictor]
        dir=[predictor]
        PRIOR->name=[loggamma]
        hyperid=[53001|Predictor]
        PRIOR->from_theta=[function (x) <<NEWLINE>>exp(x)]
        PRIOR->to_theta = [function (x) <<NEWLINE>>log(x)]
        PRIOR->PARAMETERS=[1, 1e-005]
        initialise log_precision[12]
        fixed=[1]
        user.scale=[1]
        n=[139400]
        m=[0]
        ndata=[139400]
        compute=[0]
        read offsets from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b]
        read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b]
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 0/139400  (idx,y) = (0, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 1/139400  (idx,y) = (1, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 2/139400  (idx,y) = (2, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 3/139400  (idx,y) = (3, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 4/139400  (idx,y) = (4, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 5/139400  (idx,y) = (5, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 6/139400  (idx,y) = (6, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 7/139400  (idx,y) = (7, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 8/139400  (idx,y) = (8, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 9/139400  (idx,y) = (9, 0)
        Aext=[(null)]
        AextPrecision=[1e+008]
        output:
            summary=[1]
            return.marginals=[1]
            nquantiles=[3]  [ 0.025 0.5 0.975 ]
            ncdf=[0]  [ ]
    parse section=[2] name=[INLA.Data1] type=[DATA]
    inla_parse_data [section 1]...
        tag=[INLA.Data1]
        family=[POISSON]
        likelihood=[POISSON]
        file->name=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f027895b5b]
        file->name=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f08127ab0]
        file->name=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f04d82196]
        read n=[418200] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f027895b5b]
        mdata.nattributes = 0
            0/139400  (idx,a,y,d) = (0, 0.61143, 1, 1)
            1/139400  (idx,a,y,d) = (1, 0.61143, 0, 1)
            2/139400  (idx,a,y,d) = (2, 0.61143, 0, 1)
            3/139400  (idx,a,y,d) = (3, 0.61143, 0, 1)
            4/139400  (idx,a,y,d) = (4, 0.61143, 2, 1)
            5/139400  (idx,a,y,d) = (5, 0.61143, 0, 1)
            6/139400  (idx,a,y,d) = (6, 0.61143, 0, 1)
            7/139400  (idx,a,y,d) = (7, 0.61143, 0, 1)
            8/139400  (idx,a,y,d) = (8, 0.61143, 2, 1)
            9/139400  (idx,a,y,d) = (9, 0.61143, 0, 1)
        likelihood.variant=[0]
        Link model   [LOG]
        Link order   [-1]
        Link variant [-1]
        Link ntheta  [0]
        mix.use[0]
    parse section=[5] name=[ID1] type=[FFIELD]
    inla_parse_ffield...
        section=[ID1]
        dir=[random.effect00000001]
        model=[bym]
        PRIOR0->name=[loggamma]
        hyperid=[10001|ID1]
        PRIOR0->from_theta=[function (x) <<NEWLINE>>exp(x)]
        PRIOR0->to_theta = [function (x) <<NEWLINE>>log(x)]
        PRIOR0->PARAMETERS0=[1, 0.1]
        PRIOR1->name=[loggamma]
        hyperid=[10002|ID1]
        PRIOR1->from_theta=[function (x) <<NEWLINE>>exp(x)]
        PRIOR1->to_theta = [function (x) <<NEWLINE>>log(x)]
        PRIOR1->PARAMETERS1=[1, 0.1]
        correct=[-1]
        constr=[0]
        diagonal=[1.01511e-005]
        id.names=<not present>
        compute=[1]
        nrep=[1]
        ngroup=[1]
        read covariates from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e]
        read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e]
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 0/139400  (idx,y) = (0, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 1/139400  (idx,y) = (1, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 2/139400  (idx,y) = (2, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 3/139400  (idx,y) = (3, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 4/139400  (idx,y) = (4, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 5/139400  (idx,y) = (5, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 6/139400  (idx,y) = (6, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 7/139400  (idx,y) = (7, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 8/139400  (idx,y) = (8, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 9/139400  (idx,y) = (9, 0)
        read graph from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f035f73d0e]
        file for locations=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f060e11874]
            nlocations=[2050]
            locations[0]=[1]
            locations[1]=[2]
            locations[2]=[3]
            locations[3]=[4]
            locations[4]=[5]
            locations[5]=[6]
            locations[6]=[7]
            locations[7]=[8]
            locations[8]=[9]
            locations[9]=[10]
        initialise log_precision (iid component)[4]
        fixed=[0]
        initialise log_precision (spatial component)[4]
        fixed=[0]
        adjust.for.con.comp[1]
        scale.model[1]
        connected component[0] size[2050] scale[0.522871]
        scale.model: prec_scale[0.522871]
        read extra constraint from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06cb54d15]
        Constraint[0]
            A[2050] = 1.000000
            A[2051] = 1.000000
            A[2052] = 1.000000
            A[2053] = 1.000000
            A[2054] = 1.000000
            A[2055] = 1.000000
            A[2056] = 1.000000
            A[2057] = 1.000000
            A[2058] = 1.000000
            A[2059] = 1.000000
            A[2060] = 1.000000
            e[0] = 0.000000
        rank-deficiency is *defined* [1]
        output:
            summary=[1]
            return.marginals=[1]
            nquantiles=[3]  [ 0.025 0.5 0.975 ]
            ncdf=[0]  [ ]
    parse section=[6] name=[week] type=[FFIELD]
    inla_parse_ffield...
        section=[week]
        dir=[random.effect00000002]
        model=[rw1]
        PRIOR->name=[loggamma]
        hyperid=[4001|week]
        PRIOR->from_theta=[function (x) <<NEWLINE>>exp(x)]
        PRIOR->to_theta = [function (x) <<NEWLINE>>log(x)]
        PRIOR->PARAMETERS=[1, 0.1]
        correct=[-1]
        constr=[1]
        diagonal=[1.01511e-005]
        id.names=<not present>
        compute=[1]
        nrep=[19]
        ngroup=[1]
        read covariates from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3]
        read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3]
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 0/139400  (idx,y) = (0, 255)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 1/139400  (idx,y) = (1, 256)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 2/139400  (idx,y) = (2, 257)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 3/139400  (idx,y) = (3, 258)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 4/139400  (idx,y) = (4, 259)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 5/139400  (idx,y) = (5, 260)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 6/139400  (idx,y) = (6, 261)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 7/139400  (idx,y) = (7, 262)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 8/139400  (idx,y) = (8, 263)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 9/139400  (idx,y) = (9, 264)
        file for locations=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f01472b0e]
            nlocations=[17]
            locations[0]=[1]
            locations[1]=[2]
            locations[2]=[3]
            locations[3]=[4]
            locations[4]=[5]
            locations[5]=[6]
            locations[6]=[7]
            locations[7]=[8]
            locations[8]=[9]
            locations[9]=[10]
        cyclic=[0]
        initialise log_precision[4]
        fixed=[0]
        scale.model[1]
        scale.model: prec_scale[2.41758]
        computed/guessed rank-deficiency = [1]
        output:
            summary=[1]
            return.marginals=[1]
            nquantiles=[3]  [ 0.025 0.5 0.975 ]
            ncdf=[0]  [ ]
    parse section=[7] name=[Anno] type=[FFIELD]
    inla_parse_ffield...
        section=[Anno]
        dir=[random.effect00000003]
        model=[iid]
        PRIOR->name=[loggamma]
        hyperid=[1001|Anno]
        PRIOR->from_theta=[function (x) <<NEWLINE>>exp(x)]
        PRIOR->to_theta = [function (x) <<NEWLINE>>log(x)]
        PRIOR->PARAMETERS=[1, 5e-005]
        correct=[-1]
        constr=[0]
        diagonal=[0]
        id.names=<not present>
        compute=[1]
        nrep=[1]
        ngroup=[1]
        read covariates from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5]
        read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5]
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 0/139400  (idx,y) = (0, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 1/139400  (idx,y) = (1, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 2/139400  (idx,y) = (2, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 3/139400  (idx,y) = (3, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 4/139400  (idx,y) = (4, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 5/139400  (idx,y) = (5, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 6/139400  (idx,y) = (6, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 7/139400  (idx,y) = (7, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 8/139400  (idx,y) = (8, 0)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 9/139400  (idx,y) = (9, 0)
        file for locations=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f058f67d9b]
            nlocations=[4]
            locations[0]=[1]
            locations[1]=[2]
            locations[2]=[3]
            locations[3]=[4]
        cyclic=[0]
        initialise log_precision[4]
        fixed=[0]
        computed/guessed rank-deficiency = [0]
        output:
            summary=[1]
            return.marginals=[1]
            nquantiles=[3]  [ 0.025 0.5 0.975 ]
            ncdf=[0]  [ ]
    parse section=[8] name=[IDtemp] type=[FFIELD]
    inla_parse_ffield...
        section=[IDtemp]
        dir=[random.effect00000004]
        model=[rw2]
        PRIOR->name=[loggamma]
        hyperid=[5001|IDtemp]
        PRIOR->from_theta=[function (x) <<NEWLINE>>exp(x)]
        PRIOR->to_theta = [function (x) <<NEWLINE>>log(x)]
        PRIOR->PARAMETERS=[1, 0.1]
        correct=[-1]
        constr=[1]
        diagonal=[1.01511e-005]
        id.names=<not present>
        compute=[1]
        nrep=[1]
        ngroup=[1]
        read covariates from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4]
        read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4]
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 0/139400  (idx,y) = (0, 17)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 1/139400  (idx,y) = (1, 28)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 2/139400  (idx,y) = (2, 10)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 3/139400  (idx,y) = (3, 28)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 4/139400  (idx,y) = (4, 49)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 5/139400  (idx,y) = (5, 31)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 6/139400  (idx,y) = (6, 27)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 7/139400  (idx,y) = (7, 43)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 8/139400  (idx,y) = (8, 40)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 9/139400  (idx,y) = (9, 22)
        file for locations=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06c473572]
            nlocations=[100]
            locations[0]=[1]
            locations[1]=[2]
            locations[2]=[3]
            locations[3]=[4]
            locations[4]=[5]
            locations[5]=[6]
            locations[6]=[7]
            locations[7]=[8]
            locations[8]=[9]
            locations[9]=[10]
        cyclic=[0]
        initialise log_precision[4]
        fixed=[0]
        scale.model[1]
        scale.model: prec_scale[1678.49]
        computed/guessed rank-deficiency = [2]
        output:
            summary=[1]
            return.marginals=[1]
            nquantiles=[3]  [ 0.025 0.5 0.975 ]
            ncdf=[0]  [ ]
    section=[4] name=[(Intercept)] type=[LINEAR]
    inla_parse_linear...
        section[(Intercept)]
        dir=[fixed.effect00000001]
        file for covariates=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651]
        read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651]
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 0/139400  (idx,y) = (0, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 1/139400  (idx,y) = (1, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 2/139400  (idx,y) = (2, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 3/139400  (idx,y) = (3, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 4/139400  (idx,y) = (4, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 5/139400  (idx,y) = (5, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 6/139400  (idx,y) = (6, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 7/139400  (idx,y) = (7, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 8/139400  (idx,y) = (8, 1)
        file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 9/139400  (idx,y) = (9, 1)
        prior mean=[0]
        prior precision=[0]
        compute=[1]
        output:
            summary=[1]
            return.marginals=[1]
            nquantiles=[3]  [ 0.025 0.5 0.975 ]
            ncdf=[0]  [ ]
    Index table: number of entries[6], total length[143928]
        tag                            start-index     length
        Predictor                               0     139400
        ID1                                139400       4100
        week                               143500        323
        Anno                               143823          4
        IDtemp                             143827        100
        (Intercept)                        143927          1
    parse section=[9] name=[INLA.Parameters] type=[INLA]
    inla_parse_INLA...
        section[INLA.Parameters]
            lincomb.derived.only = [Yes]
            lincomb.derived.correlation.matrix = [No]
        global_node.factor = 2.000
        global_node.degree = 2147483647
        reordering = -1
Contents of ai_param 0000000005B4BD90
    Optimiser: DEFAULT METHOD
        Option for GSL-BFGS2: tol  = 0.1
        Option for GSL-BFGS2: step_size = 1
        Option for GSL-BFGS2: epsx = 0.005
        Option for GSL-BFGS2: epsf = 0.000353553
        Option for GSL-BFGS2: epsg = 0.005
        Restart: 0
        Mode known: No
    Gaussian approximation:
        tolerance_func = 0.0005
        tolerance_step = 0.0005
        optpar_fp = 0
        optpar_nr_step_factor = -0.1
    Gaussian data: No
    Strategy:   Use a mean-skew corrected Gaussian by fitting a Skew-Normal
    Fast mode:  On
    Use linear approximation to log(|Q +c|)? Yes
        Method:  Compute the derivative exact
    Parameters for improved approximations
        Number of points evaluate:   9
        Step length to compute derivatives numerically:  0.000100002
        Stencil to compute derivatives numerically:  5
        Cutoff value to construct local neigborhood:     0.0001
    Log calculations:    On
    Log calculated marginal for the hyperparameters:     On
    Integration strategy:    Automatic (GRID for dim(theta)=1 and 2 and otherwise CCD)
        f0 (CCD only):   1.100000
        dz (GRID only):  0.750000
        Adjust weights (GRID only):  On
        Difference in log-density limit (GRID only):     6.000000
        Skip configurations with (presumed) small density (GRID only):   On
    Gradient is computed using Central difference with step-length 0.010000
    Hessian is computed using Central difference with step-length 0.100000
    Hessian matrix is forced to be a diagonal matrix? [No]
    Compute effective number of parameters? [Yes]
    Perform a Monte Carlo error-test? [No]
    Interpolator [Auto]
    CPO required diff in log-density [3]
    Stupid search mode:
        Status     [On]
        Max iter   [1000]
        Factor     [1.05]
    Numerical integration of hyperparameters:
        Maximum number of function evaluations [100000]
        Relative error ....................... [1e-005]
        Absolute error ....................... [1e-006]
    To stabilise the numerical optimisation:
        Minimum value of the -Hessian [-1.#INF]
        Strategy for the linear term [Keep]
    CPO manual calculation[No]
    Laplace-correction is Disabled.

inla_build: check for unused entries in[C:\Users\claud\AppData\Local\Temp\RtmpEHnKic\file48f03ca36bfa/Model.ini]
inla_INLA...
    Strategy = [DEFAULT]
    Sparse-matrix library... = [taucs]
    OpenMP strategy......... = [huge]
    Density-strategy........ = [Low]
    Size of graph........... = [143928]
    Number of constraints... = [21]
    Found optimal reordering=[amdc] nnz(L)=[1311252] and use_global_nodes(user)=[no]
    List of hyperparameters: 
        theta[0] = [Log precision for ID1 (idd component)]
        theta[1] = [Log precision for ID1 (spatial component)]
        theta[2] = [Log precision for week]
        theta[3] = [Log precision for Anno]
        theta[4] = [Log precision for IDtemp]
Optimise using DEFAULT METHOD
max.logdens= -96250.2972 fn= 1 theta= 4.0100 4.0000 4.0000 4.0000 4.0000  range=[-0.57 0.87]
max.logdens= -96219.2278 fn= 11 theta= 4.7114 4.2403 4.6589 4.0302 3.9658  range=[-0.55 0.76]
max.logdens= -96219.1571 fn= 12 theta= 4.7114 4.2403 4.6589 4.0302 3.9758  range=[-0.55 0.76]
max.logdens= -96218.9925 fn= 13 theta= 4.7114 4.2403 4.6689 4.0302 3.9658  range=[-0.55 0.76]
Iter=1 |grad|=17.4 |x-x.old|=0.447 |f-f.old|=31.5 
max.logdens= -96215.9208 fn= 23 theta= 4.6077 4.0979 4.9817 4.0792 3.9266  range=[-0.55 0.79]
max.logdens= -96215.8882 fn= 24 theta= 4.6077 4.0979 4.9917 4.0792 3.9266  range=[-0.55 0.79]
max.logdens= -96215.8857 fn= 26 theta= 4.6077 4.0979 4.9817 4.0792 3.9166  range=[-0.55 0.79]
max.logdens= -96215.8802 fn= 29 theta= 4.6177 4.0979 4.9817 4.0792 3.9266  range=[-0.55 0.79]
max.logdens= -96215.8789 fn= 31 theta= 4.6077 4.0979 4.9817 4.0892 3.9266  range=[-0.55 0.79]
Iter=2 |grad|=4.23 |x-x.old|=0.167 |f-f.old|=3.31 
max.logdens= -96215.6120 fn= 35 theta= 4.6854 4.1372 5.0459 4.1556 3.8729  range=[-0.55 0.77]
max.logdens= -96215.5920 fn= 36 theta= 4.6854 4.1372 5.0559 4.1556 3.8729  range=[-0.55 0.77]
max.logdens= -96215.5650 fn= 37 theta= 4.6854 4.1372 5.0459 4.1556 3.8629  range=[-0.55 0.77]
max.logdens= -96215.5594 fn= 40 theta= 4.6854 4.1272 5.0459 4.1556 3.8729  range=[-0.55 0.78]
max.logdens= -96215.5556 fn= 42 theta= 4.6854 4.1372 5.0459 4.1656 3.8729  range=[-0.55 0.77]
Iter=3 |grad|=3.78 |x-x.old|=0.064 |f-f.old|=0.309 
max.logdens= -96215.0678 fn= 46 theta= 4.6841 4.1003 5.0745 4.3431 3.7536  range=[-0.55 0.78]
max.logdens= -96215.0613 fn= 49 theta= 4.6841 4.1003 5.0745 4.3431 3.7436  range=[-0.55 0.78]
max.logdens= -96215.0462 fn= 53 theta= 4.6841 4.1003 5.0745 4.3531 3.7536  range=[-0.55 0.78]
max.logdens= -96214.7201 fn= 57 theta= 4.6827 4.0607 5.1051 4.5438 3.6259  range=[-0.55 0.78]
max.logdens= -96214.7195 fn= 60 theta= 4.6827 4.0607 5.1051 4.5438 3.6159  range=[-0.55 0.78]
max.logdens= -96214.6982 fn= 64 theta= 4.6827 4.0607 5.1051 4.5538 3.6259  range=[-0.55 0.78]
max.logdens= -96214.6894 fn= 65 theta= 4.6827 4.0607 5.0951 4.5438 3.6259  range=[-0.55 0.78]
max.logdens= -96214.5806 fn= 69 theta= 4.6810 4.0110 5.1435 4.7952 3.4659  range=[-0.55 0.79]
max.logdens= -96214.5751 fn= 70 theta= 4.6810 4.0110 5.1435 4.7952 3.4559  range=[-0.55 0.79]
max.logdens= -96214.5625 fn= 72 theta= 4.6810 4.0110 5.1435 4.7952 3.4759  range=[-0.55 0.79]
max.logdens= -96214.5220 fn= 77 theta= 4.6810 4.0110 5.1335 4.7952 3.4659  range=[-0.55 0.79]
Iter=4 |grad|=6.94 |x-x.old|=0.346 |f-f.old|=1.03 
max.logdens= -96213.2111 fn= 80 theta= 4.6921 4.0598 5.0180 5.4982 3.1000  range=[-0.55 0.78]
max.logdens= -96213.1908 fn= 83 theta= 4.6921 4.0598 5.0180 5.4982 3.1100  range=[-0.55 0.78]
max.logdens= -96213.1216 fn= 92 theta= 4.6954 4.0742 4.9809 5.7061 2.9918  range=[-0.54 0.78]
max.logdens= -96213.0983 fn= 94 theta= 4.6954 4.0742 4.9809 5.7061 3.0018  range=[-0.54 0.78]
Iter=5 |grad|=3.63 |x-x.old|=0.466 |f-f.old|=1.46 
max.logdens= -96212.6772 fn= 104 theta= 4.6090 4.1596 5.0397 6.1392 2.9764  range=[-0.54 0.77]
max.logdens= -96212.6529 fn= 105 theta= 4.6090 4.1596 5.0397 6.1392 2.9864  range=[-0.54 0.77]
Iter=6 |grad|=3.27 |x-x.old|=0.203 |f-f.old|=0.444 
max.logdens= -96212.0271 fn= 115 theta= 4.6394 4.1127 5.0223 6.5846 3.1337  range=[-0.54 0.78]
max.logdens= -96212.0082 fn= 117 theta= 4.6394 4.1127 5.0223 6.5846 3.1437  range=[-0.54 0.78]
max.logdens= -96212.0069 fn= 136 theta= 4.6699 4.0758 5.0050 7.0300 3.2910  range=[-0.54 0.78]
max.logdens= -96211.9308 fn= 137 theta= 4.6556 4.0878 5.0131 6.8214 3.2173  range=[-0.54 0.78]
max.logdens= -96211.9172 fn= 139 theta= 4.6556 4.0878 5.0131 6.8214 3.2273  range=[-0.54 0.78]
Iter=7 |grad|=1.97 |x-x.old|=0.326 |f-f.old|=0.746 
max.logdens= -96211.7961 fn= 149 theta= 4.6767 4.1390 5.0375 6.8401 3.3465  range=[-0.54 0.77]
max.logdens= -96211.7699 fn= 151 theta= 4.6767 4.1290 5.0375 6.8401 3.3465  range=[-0.54 0.77]
Iter=8 |grad|=2.66 |x-x.old|=0.0643 |f-f.old|=0.135 
max.logdens= -96211.6479 fn= 160 theta= 4.6400 4.1280 5.0413 6.8377 3.5261  range=[-0.55 0.78]
max.logdens= -96211.6408 fn= 161 theta= 4.6400 4.1180 5.0413 6.8377 3.5261  range=[-0.55 0.78]
max.logdens= -96211.6406 fn= 179 theta= 4.6399 4.1180 5.0413 6.8377 3.5265  range=[-0.55 0.78]
max.logdens= -96211.6405 fn= 188 theta= 4.6399 4.1180 5.0413 6.8377 3.5267  range=[-0.55 0.78]
max.logdens= -96211.6403 fn= 215 theta= 4.6399 4.1180 5.0413 6.8377 3.5267  range=[-0.55 0.78]
Iter=9 |grad|=1.23 |x-x.old|=0.0824 |f-f.old|=0.149 Reached numerical limit!
Optim: Number of function evaluations = 224
Compute the Hessian using central differences and step_size[0.1]. Matrix-type [dense]
max.logdens= -96211.6003 fn= 226 theta= 4.6399 4.1280 5.0413 6.7377 3.5267  range=[-0.55 0.78]
Mode not sufficient accurate; switch to a stupid local search strategy.
max.logdens= -96211.5850 fn= 238 theta= 4.6399 4.1280 5.0413 6.6327 3.5267  range=[-0.55 0.78]
max.logdens= -96211.5847 fn= 239 theta= 4.6399 4.1280 5.0413 6.7377 3.6317  range=[-0.55 0.78]
max.logdens= -96211.5691 fn= 250 theta= 4.6399 4.1280 5.0413 6.6275 3.6317  range=[-0.55 0.78]

max.logdens= -96211.5690 fn= 309 theta= 4.6399 4.1280 5.0413 6.6275 3.6317  range=[-0.55 0.78]
    47.923824    15.991226    -0.143604    -0.050339    -0.088881
    15.991226    45.140827    -0.072920     0.007869    -0.006437
    -0.143604    -0.072920    44.287863    -0.014403    -0.067313
    -0.050339     0.007869    -0.014403     2.771321     0.007341
    -0.088881    -0.006437    -0.067313     0.007341     4.990696
Eigenvectors of the Hessian
    0.737101    0.675764    0.004130    -0.002371   0.001357
    0.675728    -0.737107   0.008030    0.000781    -0.000698
    -0.008473   0.003123    0.999958    -0.001721   0.000345
    -0.000530   -0.001439   -0.000351   -0.003241   0.999994
    -0.001203   -0.002179   -0.001724   -0.999990   -0.003245
Eigenvalues of the Hessian
    62.585394
    30.480715
    44.286805
    4.990398
    2.771218
StDev/Correlation matrix (scaled inverse Hessian)
     0.153835    -0.343819     0.002756     0.004901     0.005966
                  0.158502     0.000585    -0.002345    -0.001640
                               0.150267     0.001303     0.004542
                                            0.600708    -0.001939
                                                         0.447644
Compute corrected stdev for theta[0]: negative 0.961023  positive 1.053797
Compute corrected stdev for theta[1]: negative 0.929659  positive 1.118742
Compute corrected stdev for theta[2]: negative 1.121293  positive 0.923264
Compute corrected stdev for theta[3]: negative 0.908362  positive 1.169934
Compute corrected stdev for theta[4]: negative 1.279785  positive 0.903255
    config  0/27=[ 1.159 -1.023 -1.233 -0.999 -1.408] log(rel.dens)=-2.964, [9] accept, compute, 1811.79s
    config  1/27=[ -0.000 -0.000 0.000 0.000 2.222] log(rel.dens)=-3.136, [4] accept, compute, 1822.76s
    config  2/27=[ 0.000 0.000 0.000 0.000 0.000] log(rel.dens)=-0.008, [0] accept, compute, 1829.57s
    config  3/27=[ 0.000 -0.000 2.271 0.000 -0.000] log(rel.dens)=-2.975, [2] accept, compute, 1829.27s
    config  4/27=[ -1.057 1.231 -1.233 -0.999 -1.408] log(rel.dens)=-2.962, [7] accept, compute, 1831.82s
    config  5/27=[ 0.000 2.752 0.000 0.000 -0.000] log(rel.dens)=-3.202, [1] accept, compute, 1836.11s
    config  6/27=[ 1.159 -1.023 1.016 -0.999 0.994] log(rel.dens)=-2.945, [10] accept, compute, 1837.52s
    config  7/27=[ -1.057 -1.023 1.016 -0.999 -1.408] log(rel.dens)=-3.264, [6] accept, compute, 1838.81s
    config  8/27=[ -1.057 -1.023 -1.233 -0.999 0.994] log(rel.dens)=-2.943, [5] accept, compute, 1840.14s
    config  9/27=[ -1.057 1.231 1.016 -0.999 0.994] log(rel.dens)=-2.948, [8] accept, compute, 1841.91s
    config 10/27=[ -0.000 -0.000 -0.000 2.878 -0.000] log(rel.dens)=-2.897, [3] accept, compute, 1842.85s
    config 11/27=[ 1.159 1.231 -1.233 -0.999 0.994] log(rel.dens)=-2.601, [11] accept, compute, 1799.88s
    config 12/27=[ 1.159 1.231 1.016 -0.999 -1.408] log(rel.dens)=-2.901, [12] accept, compute, 1827.93s
    config 13/27=[ 1.159 -1.023 -1.233 1.287 0.994] log(rel.dens)=-2.875, [9] accept, compute, 1800.71s
    config 14/27=[ -0.000 -0.000 0.000 -0.000 -3.148] log(rel.dens)=-2.549, [4] accept, compute, 1828.54s
    config 15/27=[ 0.000 -0.000 -2.758 -0.000 0.000] log(rel.dens)=-3.161, [2] accept, compute, 1829.26s
    config 16/27=[ -1.057 1.231 -1.233 1.287 0.994] log(rel.dens)=-2.880, [7] accept, compute, 1834.38s
    config 17/27=[ 2.592 -0.000 0.000 -0.000 0.000] log(rel.dens)=-3.032, [0] accept, compute, 1836.92s
    config 18/27=[ -0.000 -2.287 0.000 -0.000 -0.000] log(rel.dens)=-2.989, [1] accept, compute, 1839.75s
    config 19/27=[ 1.159 -1.023 1.016 1.287 -1.408] log(rel.dens)=-3.167, [10] accept, compute, 1840.65s
    config 20/27=[ -1.057 -1.023 1.016 1.287 0.994] log(rel.dens)=-3.175, [6] accept, compute, 1842.59s
    config 21/27=[ -1.057 -1.023 -1.233 1.287 -1.408] log(rel.dens)=-3.200, [5] accept, compute, 1840.58s
    config 22/27=[ -1.057 1.231 1.016 1.287 -1.408] log(rel.dens)=-3.150, [8] accept, compute, 1841.07s
    config 23/27=[ -0.000 0.000 -0.000 -2.234 0.000] log(rel.dens)=-3.117, [3] accept, compute, 1840.54s
    config 24/27=[ 1.159 1.231 -1.233 1.287 -1.408] log(rel.dens)=-2.789, [11] accept, compute, 1772.68s
    config 25/27=[ 1.159 1.231 1.016 1.287 0.994] log(rel.dens)=-2.811, [12] accept, compute, 1755.74s
    config 26/27=[ -2.364 -0.000 -0.000 0.000 0.000] log(rel.dens)=-3.052, [0] accept, compute, 329.30s
Combine the densities with relative weights:
    config  0/27=[ 0.000 0.000 0.000 0.000 0.000] weight = 1.000  neff = 664.77
    config  1/27=[ 2.592 -0.000 0.000 -0.000 0.000] weight = 0.183  neff = 598.52
    config  2/27=[ -2.364 -0.000 -0.000 0.000 0.000] weight = 0.180  neff = 730.00
    config  3/27=[ 0.000 2.752 0.000 0.000 -0.000] weight = 0.155  neff = 667.58
    config  4/27=[ -0.000 -2.287 0.000 -0.000 -0.000] weight = 0.191  neff = 686.72
    config  5/27=[ 0.000 -0.000 2.271 0.000 -0.000] weight = 0.194  neff = 648.87
    config  6/27=[ 0.000 -0.000 -2.758 -0.000 0.000] weight = 0.161  neff = 685.76
    config  7/27=[ -0.000 -0.000 -0.000 2.878 -0.000] weight = 0.210  neff = 670.11
    config  8/27=[ -0.000 0.000 -0.000 -2.234 0.000] weight = 0.168  neff = 661.63
    config  9/27=[ -0.000 -0.000 0.000 0.000 2.222] weight = 0.165  neff = 664.32
    config 10/27=[ -0.000 -0.000 0.000 -0.000 -3.148] weight = 0.297  neff = 665.13
    config 11/27=[ -1.057 -1.023 -1.233 -0.999 0.994] weight = 0.200  neff = 708.46
    config 12/27=[ -1.057 -1.023 -1.233 1.287 -1.408] weight = 0.155  neff = 712.61
    config 13/27=[ -1.057 -1.023 1.016 -0.999 -1.408] weight = 0.145  neff = 692.43
    config 14/27=[ -1.057 -1.023 1.016 1.287 0.994] weight = 0.159  neff = 695.77
    config 15/27=[ -1.057 1.231 -1.233 -0.999 -1.408] weight = 0.197  neff = 698.08
    config 16/27=[ -1.057 1.231 -1.233 1.287 0.994] weight = 0.214  neff = 701.40
    config 17/27=[ -1.057 1.231 1.016 -0.999 0.994] weight = 0.199  neff = 681.31
    config 18/27=[ -1.057 1.231 1.016 1.287 -1.408] weight = 0.163  neff = 685.21
    config 19/27=[ 1.159 -1.023 -1.233 -0.999 -1.408] weight = 0.196  neff = 648.93
    config 20/27=[ 1.159 -1.023 -1.233 1.287 0.994] weight = 0.214  neff = 652.26
    config 21/27=[ 1.159 -1.023 1.016 -0.999 0.994] weight = 0.200  neff = 632.13
    config 22/27=[ 1.159 -1.023 1.016 1.287 -1.408] weight = 0.160  neff = 636.22
    config 23/27=[ 1.159 1.231 -1.233 -0.999 0.994] weight = 0.282  neff = 639.72
    config 24/27=[ 1.159 1.231 -1.233 1.287 -1.408] weight = 0.234  neff = 643.58
    config 25/27=[ 1.159 1.231 1.016 -0.999 -1.408] weight = 0.209  neff = 623.55
    config 26/27=[ 1.159 1.231 1.016 1.287 0.994] weight = 0.229  neff = 626.86
Done.
Expected effective number of parameters: 665.603(28.161),  eqv.#replicates: 209.434
Marginal likelihood: Integration -96214.111984 Gaussian-approx -96213.965819
Compute the marginal for each of the 5 hyperparameters
Interpolation method: Auto
    Compute the marginal for theta[0] to theta[4] using numerical integration...
    Compute the marginal for theta[0] to theta[4] using numerical integration... Done.
Compute the marginal for the hyperparameters... done.
Store results in directory[C:\Users\claud\AppData\Local\Temp\RtmpEHnKic\file48f03ca36bfa/results.files]

Wall-clock time used on [C:\Users\claud\AppData\Local\Temp\RtmpEHnKic\file48f03ca36bfa/Model.ini]
    Preparations    :   0.262 seconds
    Approx inference: 4284.324 seconds [0.3|0.0|5.1|94.5|0.1]%
    Output          :   0.366 seconds
    ---------------------------------
    Total           : 4284.952 seconds

> 
> file=paste0("Output/",Sex,"/output",area,".Rdata")
> save(m,file=file)
Warning message:
In save(m, file = file) : 'package:stats' may not be available when loading
> 
> ############################################################################
> # 3. GIVEN THE INLA OUTPUTS, 
> # SIMULATE FROM THE POSTERIOR DISTRIBUTION & SAVE THE OUTPUT 
> ############################################################################
> # 
> make.posteriors(area,Sex)
>   
> ############################################################################
> # 4. GIVEN THE INLA OUTPUTS AND SIMULATIONS,
> # COMPUTE THE PREDICTIONS FOR 2020 & SAVE THE OUTPUT 
> ############################################################################
> make.predictions(area,Sex)
Joining, by = c("COD_PROVCOM", "week")
Joining, by = c("COD_PROVCOM", "COMUNE", "COD_PROV", "DEN_UTS", "SIGLA", "DEN_REG", "COD_REG", "week", "Anno", "sex", "ID_Ita", "Vuln2011", "p.africa", "p.america", "p.asia", "p.europa", "age.group", "temperature", "temp_grp", "IDtemp")
There were 50 or more warnings (use warnings() to see the first 50)

The warnings are:

> warnings()
Warning messages:
1: In rpois(nsim, x) : NAs produced
2: In rpois(nsim, x) : NAs produced
3: In rpois(nsim, x) : NAs produced
[... the same warning, "In rpois(nsim, x) : NAs produced", repeated through 50 ...]

The files in ./Output/Females are:

Thank you very much for your time.

giabaio commented 3 years ago

Are the files valid? So for example, if you try

load("Output/Females/outputNordOvest.Rdata")

does it throw an error?

The output you show seems to indicate a problem when simulating from the posterior predictive distribution. (BTW: you don't need to replicate the first part of the script --- the code up to the INLA output seems to work OK; if there is an issue, I think it's in saving the output to the relevant file and then using it in the rest of the script.) The script does define the input nsim (the number of simulations to draw from the approximate joint posterior distribution of all the model parameters, which in turn are used to construct the simulations from the posterior predictive distribution of the outcome, i.e. the expected number of deaths in 2020). So I suspect there may be an issue in the definition of the linear predictor (the parameter of the Poisson model used to simulate from the predictive distribution) --- although I still think the most likely culprit is that some files are not written correctly, so the script doesn't have everything it needs to compute the relevant quantities...
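
As a pointer for the debugging: rpois() only emits that "NAs produced" warning when the rate it is given is invalid. A minimal sketch with toy values (not the repo code):

# rpois() returns NA, with a "NAs produced" warning, whenever the rate
# is NA, negative or non-finite --- exactly what happens if the linear
# predictor feeding the Poisson simulation is broken upstream
lambda=c(2.5,NA,-1,Inf)  # toy rates: one valid, three invalid
rpois(4,lambda)          # the first draw is fine; the other three are NA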

I can go back and check the original script in case we missed something when uploading the files to the GitHub repo (though I don't think we did) --- but it'd be good to check whether, e.g., Output/Females/outputNordOvest.Rdata does store the relevant data, and then that the output of make.posteriors also creates the relevant objects...
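
For instance, a couple of quick file-level checks (just a sketch) would rule out a failed save straight away:

f="Output/Females/outputNordOvest.Rdata"
file.exists(f)   # TRUE if the model run actually wrote the file
file.size(f)     # a zero (or tiny) size would point to a failed save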

giabaio commented 3 years ago

A good way of debugging this is to run the function make.posteriors essentially line by line. So define

area="NordOvest"
Sex="Female"
nsim=1000

and then try

library(INLA)
library(dplyr)
file=paste0("Output/",Sex,"/output",area,".Rdata")
load(file)

Here something should already be clear --- if the file is not valid, load() will throw an error. You could also inspect the loaded object to see what's inside it.
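
For example (a sketch --- note that load() returns the names of the objects it restores):

loaded=load(file)
loaded               # should include "m", the fitted INLA object
class(m)             # should be "inla"
str(m,max.level=1)   # top-level structure, without flooding the console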

Then you can continue with the script and see if, line-by-line, you get an error that can explain what is wrong...

  data=as_tibble(m$.args$data)
  distr=m$.args$family

See if this is still OK. Then you can simulate from the posterior --- perhaps for the purpose of debugging, just do 1 simulation (notice I'm using the code inla.posterior.sample(n=1,...) below)

  # Now samples from the approximate joint posterior distribution of all the parameters
  tic=proc.time()
  post.samp=inla.posterior.sample(n=1,m,selection=list(
    "(Intercept)"=1,
    "week"=1:nrow(m$summary.random$week),
    "ID1"=1:(nrow(m$summary.random$ID1)/2), #NB Only needs the first half as there are 2 components to BYM
    "IDtemp"=1:nrow(m$summary.random$IDtemp),
    "Anno"=4
  ),
  num.threads = round(parallel::detectCores()*.8),
  verbose=FALSE
  )
  toc=proc.time()-tic

And again see what happens here (should be very fast as it's only one simulation). If there are no errors, try the rest

# Formats simulations in a matrix (like BUGS would do)
sim=matrix(unlist(lapply(post.samp,function(x) x$latent[1:nrow(post.samp[[1]]$latent),])),ncol=nrow(post.samp[[1]]$latent),byrow=T)
colnames(sim)=rownames(post.samp[[1]]$latent)

# Now simulates from the posterior of the hyperparameters (to get sd for the 'Anno' effect)
sd.anno=sqrt(1/inla.hyperpar.sample(nsim,m,improve.marginals=TRUE)[,"Precision for Anno"])
# Then simulates from the predictive distribution for Anno=5
anno.2020=rnorm(nsim,0,sd.anno)
# Then substitutes the relevant column in 'sim'
sim[,grep("Anno",colnames(sim))]=anno.2020
file=paste0("Output/",Sex,"/posteriors",area,".Rdata")
save(sim,data,distr,file=file)

At this point, it should have saved the simulations (sim) onto the file Output/Female/posteriorsNordOvest.Rdata, which you can then load and inspect to see what it contains, continuing the debugging from there...
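
Once that file exists, a few quick checks of its contents (again, just a sketch) would be:

load(paste0("Output/",Sex,"/posteriors",area,".Rdata"))
dim(sim)                    # rows = number of draws, columns = latent components
head(colnames(sim))         # which effect each column refers to
grep("Anno",colnames(sim))  # the column(s) that get replaced by anno.2020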

ClaudMor commented 3 years ago

So I post below the outputs I got from your code:

This chunk runs with no errors.

area="NordOvest"
Sex="Females" # added the final 's' for compatibilty with folder structure
nsim=1000

library(INLA)
library(dplyr)
file=paste0("Output/",Sex,"/output",area,".Rdata")
load(file)

The loaded object is a List of 51. A screenshot of its structure is attached (Screenshot 2020-11-19 125455).

This runs OK:

  data=as_tibble(m$.args$data)
  distr=m$.args$family

and produces the output shown in the attached screenshot (Screenshot 2020-11-19 125832).

This:

  # Now samples from the approximate joint posterior distribution of all the parameters
  tic=proc.time()
  post.samp=inla.posterior.sample(n=1,m,selection=list(
    "(Intercept)"=1,
    "week"=1:nrow(m$summary.random$week),
    "ID1"=1:(nrow(m$summary.random$ID1)/2), #NB Only needs the first half as there are 2 components to BYM
    "IDtemp"=1:nrow(m$summary.random$IDtemp),
    "Anno"=4
  ),
  num.threads = round(parallel::detectCores()*.8),
  verbose=FALSE
  )
  toc=proc.time()-tic

Gives no error.

This:

# Formats simulations in a matrix (like BUGS would do)
sim=matrix(unlist(lapply(post.samp,function(x) x$latent[1:nrow(post.samp[[1]]$latent),])),ncol=nrow(post.samp[[1]]$latent),byrow=T)
colnames(sim)=rownames(post.samp[[1]]$latent)

# Now simulates from the posterior of the hyperparameters (to get sd for the 'Anno' effect)
sd.anno=sqrt(1/inla.hyperpar.sample(nsim,m,improve.marginals=TRUE)[,"Precision for Anno"])
# Then simulates from the predictive distribution for Anno=5
anno.2020=rnorm(nsim,0,sd.anno)
# Then substitutes the relevant column in 'sim'
sim[,grep("Anno",colnames(sim))]=anno.2020
file=paste0("Output/",Sex,"/posteriors",area,".Rdata")
save(sim,data,distr,file=file)

Gives an error, but the file is still saved; you may find it in the attachment below:

Error in sim[, grep("Anno", colnames(sim))] = anno.2020 : 
  number of items to replace is not a multiple of replacement length
> file=paste0("Output/",Sex,"/posteriors",area,".Rdata")
> save(sim,data,distr,file=file)

posteriorsNordOvest.zip

Anyway, I think there may be a shortcut to solve this issue. With respect to goal 2, specified at the end of my first comment:

  1. Our goal is to get age-stratified excess mortality for Piedmont (so we need at least regional resolution). I think these estimates are in predItaly[["predictions_prov"]]: is that right?

Does the output one gets from running the notebook as-is here on GitHub suffice? (I tried my best to upload the output files here, but they are too big even after the best compression I could manage.)
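
Concretely, what I have in mind is something like the following (just a sketch on my side --- I'm assuming predictions_prov is a data frame carrying the DEN_REG column that appears in the join messages above):

library(dplyr)
# hypothetical subsetting; the DEN_REG column name is assumed from the join messages
piemonte=predItaly[["predictions_prov"]] %>% filter(DEN_REG=="Piemonte")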

giabaio commented 3 years ago

The error you get here is, I think, because you've only drawn 1 simulation from the joint posterior, so the dimensions don't match: sim has a single row, while anno.2020 contains nsim = 1000 values (see the toy sketch below). To go to your point --- yes: when you've finished with the analysis and have all the results for all of Italy, you can simply subset predItaly to retrieve all the provinces in Piemonte and they'll do. The files are huge and of course they can't be shared directly...
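
To see the mechanics of that error in isolation (a toy illustration, not the repo code):

nsim=1000
sim=matrix(0,nrow=1,ncol=5)   # n=1 posterior draw -> a single row
colnames(sim)=c("x1","x2","Anno:1","x3","x4")
anno.2020=rnorm(nsim)         # but nsim=1000 values to plug in
sim[,grep("Anno",colnames(sim))]=anno.2020
# Error: number of items to replace is not a multiple of replacement length
# Fix: either draw n=nsim samples in inla.posterior.sample(), or set nsim=1
# for the debugging run, so the two dimensions agree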

To the main point: I think the posterior estimates are OK --- but we now need to debug why the prediction step gives you a problem. I'd suggest doing the same kind of line-by-line analysis for make.predictions to see what the issue is on your machine. Let me know if I can help further!
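
A convenient way to do that without copy-pasting the function body is (a sketch):

debugonce(make.predictions)   # flag the next call for the interactive debugger
make.predictions(area="NordOvest",Sex="Females")
# R stops at the first line of the function; press 'n' to step through it
# and inspect the intermediate objects as you go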

giabaio commented 3 years ago

Also, if it helps, we could filter the output from our original model to include only Piemonte and somehow make it available to you?

ClaudMor commented 3 years ago

Hello,

Yes, it would really help if you could filter the output from your original model to include Piemonte and make it available for us.

You may contact me on Twitter via DM (my username is @ClaudioMoroni5) or email me at claudio.moroni@edu.unito.it. One option could be a shared drive.

I'll try to perform the line-by-line analysis of make.predictions that you mention as soon as possible. Thank you very much for your time and help!