Open oharar opened 4 years ago
I guess there may be some errors in your code. I was having a look at this and wanted to be sure I was on the same page as you...
```
sigma.site ~ dgamma(1, 1)       # site-specific variance
sigma.species ~ dgamma(1, 1)    # species-specific variance

# Site-specific effect
for(site.tag in 1:n.sites){
  beta.site[site.tag] ~ dnorm(0, var = sigma.site)
  beta.siteSTAR[site.tag] <- beta.site[site.tag] - mean(beta.site[1:n.sites])
}

# Species-specific effect
for(spe.tag in 1:n.species){
  beta.species[spe.tag] ~ dnorm(0, var = sigma.species)
  beta.speciesSTAR[spe.tag] <- beta.species[spe.tag] - mean(beta.species[1:n.species])
}

# Dummy intercept: this has a prior variance of 1
alpha ~ dnorm(0, var = 1)
alphaSTAR <- alpha + mean(beta.site[1:n.sites]) + mean(beta.species[1:n.species])

for(site.tag in 1:n.sites){
  for(spe.tag in 1:n.species){
    mu[site.tag, spe.tag] <- alphaSTAR + beta.siteSTAR[site.tag] + beta.speciesSTAR[spe.tag]
  }
}
```
You do actually need mu.site etc. in the code as well (but don't put too wide priors on them). The idea is that you don't care if they are not identifiable, because you only monitor the identifiable nodes. By letting the means float around, you can improve mixing a lot. It's a weird trick, but it works.
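The "sweeping" described above can be illustrated outside the model code. Below is a hypothetical Python sketch (not the NIMBLE model itself): given mock MCMC draws of an unidentifiable intercept and site effects, it computes the identifiable swept versions and checks that the linear predictor is unchanged while the swept effects are exactly mean-zero.

```python
import numpy as np

# Hypothetical mock draws: alpha and beta.site are individually
# unidentifiable (only their sum enters the likelihood), so both
# can drift freely during sampling.
rng = np.random.default_rng(42)
n_draws, n_sites = 1000, 5
alpha = rng.normal(0.0, 1.0, size=n_draws)
beta_site = rng.normal(2.0, 1.0, size=(n_draws, n_sites))

# Post-sweep: centre the betas and push their mean into the intercept.
beta_star = beta_site - beta_site.mean(axis=1, keepdims=True)  # sum-to-zero
alpha_star = alpha + beta_site.mean(axis=1)                    # absorbs the mean

# The linear predictor is identical under both parameterisations.
mu_raw = alpha[:, None] + beta_site
mu_star = alpha_star[:, None] + beta_star
assert np.allclose(mu_raw, mu_star)

# The swept effects are mean-zero in every draw, so they (and
# alpha_star) are the identifiable quantities worth monitoring.
assert np.allclose(beta_star.mean(axis=1), 0.0)
```

This is why monitoring only the starred nodes is safe: they are deterministic functions of the raw draws and do not depend on where the unidentifiable mean wanders.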
Does it then mean that I will need mu.site and mu.species, but it actually has no connection to any of the other parameters in the model?
I now see the connection. I missed out on a few details.
But I guess the final code should look like this:

```
for(site.tag in 1:n.sites){
  for(spe.tag in 1:n.species){
    mu[site.tag, spe.tag] <- alphaSTAR + beta.siteSTAR[site.tag] + beta.speciesSTAR[spe.tag]
  }
}
```

since `alphaSTAR + beta.siteSTAR[site.tag] + beta.speciesSTAR[spe.tag]` is the parameterization we intend to use. @oharar
No, both parameterisations give the same mu[], but the one that goes into the likelihood is the one with the unidentifiable parameters: that is what the estimation sees, and it is the more flexible one. For reporting the results we use the identifiable versions.
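That the two parameterisations give identical mu[] can be checked with a small numeric example (hypothetical effect values, matching the two-factor mu[site, species] structure above):

```python
import numpy as np

# Hypothetical raw (unidentifiable) effects for 3 sites and 2 species.
alpha = 0.7
beta_site = np.array([1.2, -0.3, 2.1])
beta_species = np.array([0.5, -1.0])

# Swept (identifiable) versions: centre each set of effects and push
# their means into the intercept.
alpha_star = alpha + beta_site.mean() + beta_species.mean()
beta_site_star = beta_site - beta_site.mean()
beta_species_star = beta_species - beta_species.mean()

# Cell means under both parameterisations, via broadcasting.
mu_raw = alpha + beta_site[:, None] + beta_species[None, :]
mu_star = alpha_star + beta_site_star[:, None] + beta_species_star[None, :]
assert np.allclose(mu_raw, mu_star)  # identical mu[] either way
```

The centring terms cancel algebraically, so the likelihood cannot tell the two versions apart; only the starred parameters have a fixed meaning.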
Okay... I now understand it. Thank you very much for the clarification. @oharar
```
for(site.tag in 1:n.sites){
  beta.site[site.tag] ~ dnorm(mu.site, var = sigma.site)
  beta.siteSTAR[site.tag] <- beta.siteSTAR[site.tag] - mean(beta.site[])
}
for(spe.tag in 1:n.species){
  beta.species[spe.tag] ~ dnorm(mu.species, var = sigma.species)
  beta.speciesSTAR[spe.tag] <- beta.speciesSTAR[spe.tag] - mean(beta.species[])
}
```
OR
```
for(site.tag in 1:n.sites){
  beta.site[site.tag] ~ dnorm(0, var = sigma.site)
  beta.siteSTAR[site.tag] <- beta.site[site.tag] - mean(beta.site[1:n.sites])
}
for(spe.tag in 1:n.species){
  beta.species[spe.tag] ~ dnorm(0, var = sigma.species)
  beta.speciesSTAR[spe.tag] <- beta.species[spe.tag] - mean(beta.species[1:n.species])
}
```
I have tried the latter, and everything mixes well except alpha, mu.site, and mu.species. I tried the former and it won't run. I wanted to find out whether the former, which you wrote, is what I should actually be using. (Error message: `Error: In building model, each of the following nodes has itself as a parent node: beta.siteSTAR[1]`)
@oharar
The first one should have `beta.siteSTAR[site.tag] <- beta.site[site.tag] - mean(beta.site[])` (read the error message!)
As you stated, mu.site and mu.species can be unidentifiable, but I expect alpha to be identifiable, and that is not the case. I will send you an email with the convergence plots. Moreover, sigma.site and sigma.species are badly underestimated.
I suspect that the summation of the lambdas makes it difficult for the model to correctly estimate alpha and the variances of beta.site and beta.species. I have been looking for alternatives that avoid summing the lambdas, but I don't think one exists yet.
@oharar
In the Nimble code, try this. The xSTAR variables are identifiable, with the betas forced to have a mean of zero. It can improve the mixing of the random effects, although it's possible that Nimble is smart enough that it doesn't help.