parksw3 closed this 1 year ago

Some obvious issues:

We didn't have the first issue during our last run, so maybe it was just a bad sample? We also had some problems with truncation before, so we need to fix those anyway.

My current guess is they're padding problems... I tried running my own fits but I don't have access to the exact samples that you used to fit so I was unable to replicate the first problem for the short delay. I was able to replicate the problem for the long delay, where zero padding is messing up the inference, even though I only have 5 out of 400 zeroes:

Again, getting rid of the zeroes gets rid of the problem for the most part, although I'm still overestimating the sd by quite a bit (but not as badly as before).

Not even covering the empirical truncated SD:

But actually, if we compare with the log empirical mean and sd, our posteriors are good:

Compare this with:

where `exp(-0.25) = 0.77`. So if we're matching the logmean and logsd so well, why is our SD estimate so far off? Because of nonlinear effects... Jensen's inequality, essentially. If we take the empirical logmean and logsd and calculate the sd of the corresponding lognormal distribution, it matches up with what we estimate.

This brings up an interesting point about what "ground truth" we should compare to. Comparing to the raw mean and sd could make some of our fits look worse than they are. But I still think it makes more sense to compare to the raw mean and sd; I'm not suggesting we should be comparing to the logmean and logsd. Maybe something we need to discuss in the Discussion section.
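As a sanity check on the Jensen's inequality point, the calculation is roughly this (a toy sketch with made-up parameters, not the actual fitting code):

```r
# Toy illustration of the log-scale vs natural-scale mismatch.
# The sample below is hypothetical; the point is the moment conversion.
set.seed(101)
x <- rlnorm(400, meanlog = 1.6, sdlog = 0.6)

logmean <- mean(log(x))
logsd <- sd(log(x))

# Natural-scale moments of the lognormal implied by (logmean, logsd):
# E[X] = exp(mu + sigma^2 / 2), SD[X] = E[X] * sqrt(exp(sigma^2) - 1)
implied_mean <- exp(logmean + logsd^2 / 2)
implied_sd <- implied_mean * sqrt(exp(logsd^2) - 1)

# With clean lognormal data these agree; with zero padding or truncation
# the raw sd can drift well away from the implied sd even when the
# log-scale moments match closely.
c(raw_sd = sd(x), implied_sd = implied_sd)
```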
> My current guess is they're padding problems... I tried running my own fits but I don't have access to the exact samples that you used to fit so I was unable to replicate the first problem for the short delay.
I think you can get these from `data/scenarios`?
> I was able to replicate the problem for the long delay, where zero padding is messing up the inference, even though I only have 5 out of 400 zeroes.
I'm super surprised it is such a dominant effect but I guess it makes sense.
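To see why so few zeroes can dominate, here's a quick sketch. I'm assuming the zeroes get replaced by a small constant before logging (that padding value is my assumption, not necessarily what the code does):

```r
# Why 5 zero-padded points out of 400 can wreck a log-scale fit:
# after logging, the padded value is an extreme outlier.
set.seed(101)
x <- rlnorm(395, meanlog = 1.6, sdlog = 0.6)
x_padded <- c(x, rep(1e-3, 5)) # 5 zero delays padded to an assumed 1e-3

# The lognormal MLE is essentially the mean and sd of the logged data,
# so points at log(1e-3) ~ -6.9 drag sdlog up hard.
c(sdlog_clean = sd(log(x)), sdlog_padded = sd(log(x_padded)))
```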
> This brings up an interesting point about what "ground truth" we should compare to. Comparing to the raw mean and sd could make some of our fits look worse than they are. But I still think it makes more sense to compare to the raw mean and sd; I'm not suggesting we should be comparing to the logmean and logsd. Maybe something we need to discuss in the Discussion section.
Agree, this is really, really interesting. Also links to what people should be reporting when they estimate distributions (I would argue it should be the actual parameters of the distribution rather than the mean, sd, etc., but it's much more common to do the latter).
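On the reporting point: from a reported mean and sd alone you can only get back to the parameters by assuming a family. E.g., assuming a lognormal (made-up numbers), the method-of-moments inversion is:

```r
m <- 6.2 # reported natural-scale mean (made up)
s <- 4.1 # reported natural-scale sd (made up)

# Invert the lognormal moment formulas:
sigma2 <- log(1 + s^2 / m^2)
mu <- log(m) - sigma2 / 2

# Round trip: should reproduce the reported mean and sd
exp(mu + sigma2 / 2)
exp(mu + sigma2 / 2) * sqrt(exp(sigma2) - 1)
```

Reporting the parameters directly would avoid both the extra step and the family assumption.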
> I think you can get these from `data/scenarios`?
I think this shows the whole simulation? Not the exact subsample you're looking at?
> Also links to what people should be reporting when they estimate distributions (I would argue it should be the actual parameters of the distribution rather than the mean, sd, etc., but it's much more common to do the latter).
Good point. Lots to discuss in the paper.
This is now looking more in line with expectations, and the problems outlined above appear mostly resolved. The truncation-only model is still somewhat problematic at longer delays and early in the outbreak, though (I think this is probably just to be expected?).
The truncation and censoring model is still slightly biased (in both the latent and non-latent case) for the standard deviation across all delays.
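For context, the truncation-and-censoring likelihood is roughly of this form (a sketch with made-up numbers, not the exact model in this repo):

```r
# Sketch of a daily-censored, right-truncated lognormal log likelihood.
# delay_day: observed delay in whole days; obs_time: max observable delay
# for that observation (time from onset to the analysis cutoff).
loglik_trunc_cens <- function(meanlog, sdlog, delay_day, obs_time) {
  # interval censoring: P(delay in [d, d + 1))
  num <- plnorm(delay_day + 1, meanlog, sdlog) -
    plnorm(delay_day, meanlog, sdlog)
  # right truncation: renormalise by P(delay observable by the cutoff)
  denom <- plnorm(obs_time, meanlog, sdlog)
  sum(log(num) - log(denom))
}

# e.g. delays of 2, 3, 5 days with 10, 8, 6 days of possible observation
loglik_trunc_cens(1.6, 0.6, c(2, 3, 5), c(10, 8, 6))
```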
I think this is now resolved so closing.