Closed — erikbern closed this 6 years ago
Lots of extra code so far, but will result in a net reduction once I'm done
Something is wacky with the convergence of the Gamma-Beta model, not sure what's up
Not very happy with how the Gamma-Beta model fitting works, will revisit in the future
Ok, was able to switch from L-BFGS-B to Nelder-Mead for Gamma by doing a hacky variable transform to get rid of any bounds. Works really well and was a lot less code!
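For reference, this is roughly the kind of trick I mean, sketched on a plain (uncensored) Gamma likelihood rather than the actual censored conversion model in this PR; the `fit_gamma` name and shape/rate parameterization are just for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln


def fit_gamma(x):
    """Fit Gamma(k, lam) to positive samples x with Nelder-Mead.

    Optimizes over (log k, log lam) so both parameters stay positive
    without needing any bound constraints.
    """
    x = np.asarray(x, dtype=float)

    def neg_log_likelihood(z):
        k, lam = np.exp(z)  # the "hacky" transform: unconstrained -> positive
        return -np.sum(k * np.log(lam) - gammaln(k)
                       + (k - 1) * np.log(x) - lam * x)

    res = minimize(neg_log_likelihood, x0=[0.0, 0.0], method='Nelder-Mead')
    return np.exp(res.x)  # back-transform to (k, lam)
```

E.g. `fit_gamma(np.random.gamma(2.0, 1.0, size=10000))` should land close to k ≈ 2, lam ≈ 1, with no bounds passed to the optimizer.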
Something is wacky with the confidence intervals sometimes when you have a small number of observations, will investigate
The bad news is I realized this approach is actually pretty dumb and won't work: you can't fit a prior distribution using MAP. I'm a Bayesian clown.
The good news is I think it's salvageable by factoring the Beta construction out of the optimization problem; I'm pretty sure the posterior as a function of c will match a Beta distribution. The other good news is that a bunch of the code changes lead to much faster and more robust optimization anyway.
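Roughly the shape of that closed-form construction (a toy sketch with a uniform prior and plain converted/not-converted counts, not the PR's code; with censoring the effective counts aren't just raw conversions):

```python
from scipy import stats


def conversion_interval(n_converted, n_total, alpha=0.05):
    """Closed-form Beta posterior over the conversion rate c.

    Assumes a Beta(1, 1) prior and simple converted / not-converted counts;
    returns the posterior median and a (1 - alpha) credible interval.
    """
    posterior = stats.beta(1 + n_converted, 1 + n_total - n_converted)
    lo, mid, hi = posterior.ppf([alpha / 2, 0.5, 1 - alpha / 2])
    return mid, (lo, hi)
```

The point is that the interval comes straight out of the Beta quantiles, with no bootstrap loop around the optimizer.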
Got it working. This is about 100x faster and more robust than using bootstrapping.
The only downside is that the other parameters (k and lambda) are not fit to each bootstrap sample, so you don't know what the uncertainty is. I'm also not 100% sure if the posterior wrt c truly is a Beta distribution, but just plotting the marginal probability distribution, it definitely matches really well.
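The "just plotting" check is something like this (my illustration only; the samples of c could come from bootstrapping or from sampling the marginal, and the moment-matched Beta fit is just for the overlay):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats


def compare_to_beta(c_samples):
    """Overlay a histogram of sampled c values with a moment-matched Beta PDF."""
    c_samples = np.asarray(c_samples, dtype=float)
    mean, var = c_samples.mean(), c_samples.var()
    # Method-of-moments Beta fit: a + b = mean * (1 - mean) / var - 1
    s = mean * (1 - mean) / var - 1
    a, b = mean * s, (1 - mean) * s

    grid = np.linspace(1e-6, 1 - 1e-6, 500)
    plt.hist(c_samples, bins=50, density=True, alpha=0.5, label='samples of c')
    plt.plot(grid, stats.beta(a, b).pdf(grid), label='Beta(%.1f, %.1f)' % (a, b))
    plt.legend()
    plt.show()
```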
Instead of using bootstrapping to estimate uncertainty of c, just fit a Beta distribution directly
This is probably 10-100x faster, although it's a few more lines of math (lots of gammaln). Will do the same thing for Weibull and Gamma and then remove the bootstrapping (and the old non-Beta models). Will also remove a few other things, like sharing parameters etc.
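The "lots of gammaln" is basically log-Beta terms like the one below; this is just the standard identity for the Beta normalizer, not the PR's exact code:

```python
import numpy as np
from scipy.special import gammaln


def beta_logpdf(c, a, b):
    """Log density of Beta(a, b) at c, written out with gammaln.

    log B(a, b) = gammaln(a) + gammaln(b) - gammaln(a + b)
    """
    log_norm = gammaln(a) + gammaln(b) - gammaln(a + b)
    return (a - 1) * np.log(c) + (b - 1) * np.log(1 - c) - log_norm
```

Working in log space with gammaln keeps the likelihood numerically stable even when the counts (and hence a, b) get large.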