Yes, I think this is precisely the point: the truncation does not factor in the shift at the moment.
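To make that point concrete, here is a minimal R sketch (an illustration of the principle only, not brms internals; the function name dshifted_lnorm_trunc is made up) of how a shifted lognormal truncated above at ub would need to be normalized: the shift has to be subtracted from the bound before it goes into the lognormal CDF.

# Illustration only, not brms internals; dshifted_lnorm_trunc is a made-up name.
# Density of a shifted lognormal truncated above at ub: the normalizing
# constant must use the shifted bound (ub - ndt), not ub itself.
dshifted_lnorm_trunc <- function(y, meanlog, sdlog, ndt, ub) {
  num   <- dlnorm(y - ndt, meanlog = meanlog, sdlog = sdlog)
  denom <- plnorm(ub - ndt, meanlog = meanlog, sdlog = sdlog)
  ifelse(y > ndt & y <= ub, num / denom, 0)
}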
I found a bug in the Stan code generation for truncated shifted-lognormal models. I am not sure whether that fixes the pp_check problem, though; the prediction code underlying pp_check looks correct to me.
If it does not fix the problem, can you please provide a minimal reproducible example?
I did not hear back from you for a while, so I am closing this issue, but please feel free to reopen it if the problem remains despite my fix.
Based on a prior predictive check using pp_check(brms_fit), it does not look like the shifted lognormal truncates properly when ndt is modeled through a distributional formula, under certain circumstances which I describe below.
First, here is the code used to make the model and run the prior predictive check:
fit0 <- brm(
  formula = bf(
    formula = reaction_time | trunc(ub = 500) ~ 1,
    ndt ~ 1 + bigram_ideal_first_surprisal + block_num
  ),
  data = experiment_df,
  family = shifted_lognormal(),
  prior = c(
    set_prior("normal(-2,2)", class = "Intercept"),
    set_prior("normal(5.5,0.01)", class = "Intercept", dpar = "ndt"),
    set_prior("normal(0,0.01)", class = "b", coef = "bigram_ideal_first_surprisal", dpar = "ndt"),
    set_prior("normal(-0.05,0.001)", class = "b", coef = "block_num", dpar = "ndt")
  ),
  sample_prior = "only"
)
pp_check(fit0)

Interestingly, when I make the priors on the inputs to ndt very small, I get a clear truncation at 500 in the pp_check() output, but when the priors on the inputs to ndt are large, the truncation fails.
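As a hedged illustration of why the size of ndt matters (made-up parameter values, not the actual experiment_df data), the following R simulation draws from a lognormal truncated at ub on the unshifted scale and then adds ndt, mimicking a truncation that does not factor in the shift. The bound effectively holds when ndt is small but is clearly violated when ndt is large.

# Hedged sketch with made-up parameter values: simulate y = ndt + lognormal,
# where the lognormal part is truncated at ub WITHOUT subtracting ndt
# (i.e. a truncation that does not factor in the shift).
set.seed(1)
sim_y <- function(n, meanlog, sdlog, ndt, ub) {
  # inverse-CDF draw from a lognormal truncated above at ub (unshifted scale)
  lp <- qlnorm(runif(n) * plnorm(ub, meanlog, sdlog), meanlog, sdlog)
  ndt + lp
}
ub <- 500
mean(sim_y(1e4, meanlog = 4, sdlog = 1, ndt = 10,  ub = ub) > ub)  # ~0: bound effectively holds
mean(sim_y(1e4, meanlog = 4, sdlog = 1, ndt = 250, ub = ub) > ub)  # noticeably > 0: bound violated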