seabbs closed this 1 year ago
I think it's very useful to show the run time. In particular, after looking at this I probably wouldn't want to use the latent variable model for very large data sets if I wanted quick results during an ongoing outbreak (just because it takes so long).
By the way, are we only getting divergent chains in those three cases?
> By the way, are we only getting divergent chains in those three cases?
Yes. Well, this flags fits with 1% or more divergent samples, so I guess there may be some others with a very small number.
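For reference, a minimal sketch (assuming a cmdstanr fit object `fit`; the helper names here are hypothetical, not part of the package) of how that 1% cut-off could be checked:

```r
library(cmdstanr)
library(posterior)

# Proportion of post-warmup transitions that diverged, pooled across chains.
prop_divergent <- function(fit) {
  mean(extract_variable(fit$sampler_diagnostics(), "divergent__"))
}

# Flag a fit once 1% or more of its samples are divergent.
has_divergence_issue <- function(fit, threshold = 0.01) {
  prop_divergent(fit) >= threshold
}
```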
> After looking at this I probably wouldn't want to use the latent variable model for very large data sets if I wanted quick results during an ongoing outbreak (just because it takes so long).
Potentially, but if we think about most outbreaks there are a few hundred data points and only a few models to run. In that setting I think the greater accuracy is likely worth the run-time.
> Yes. Well, this flags fits with 1% or more divergent samples, so I guess there may be some others with a very small number.
That's great. I thought something like the latent variable model would have a difficult time converging... but I guess not!
> Potentially, but if we think about most outbreaks there are a few hundred data points and only a few models to run. In that setting I think the greater accuracy is likely worth the run-time.
Yes, I definitely agree.
So it looks like the changes we made to address #34 and #36 have really improved the stability of inference. This means that virtually all models are now fitting "well", and those that aren't have fairly clear issues due to lack of data etc. It does mean we have some thinking to do about whether we still need the diagnostic figure panel (see attached for its current state).
Some of this could of course be due to random variation in our subsampling, so it may change if we need to update the fits again.
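If it would help that decision, here is a minimal sketch (a hypothetical helper, assuming the refitted models are cmdstanr fits collected in a named list `fits`) of a per-model diagnostic table that might make the figure panel redundant:

```r
library(purrr)
library(dplyr)
library(posterior)

# One row per model: worst-case rhat, worst-case bulk ESS, and divergence rate.
diagnostic_table <- function(fits) {
  imap_dfr(fits, function(fit, model) {
    draw_summary <- summarise_draws(fit$draws(), rhat, ess_bulk)
    tibble(
      model = model,
      max_rhat = max(draw_summary$rhat, na.rm = TRUE),
      min_ess_bulk = min(draw_summary$ess_bulk, na.rm = TRUE),
      prop_divergent = mean(
        extract_variable(fit$sampler_diagnostics(), "divergent__")
      )
    )
  })
}
```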