Closed by dmbates 10 years ago
It is usual to select a distribution other than the Normal as the prior, so it would be useful to offer a wider selection of priors. Would an implementation supporting a range of priors require a lot of work, and would it be hard to achieve in general?

How general would you want the specification of a Bayes GLM model to be? If the prior on the coefficients of the linear predictor is multivariate normal then the calculation of the estimates is straightforward: just replace the iteratively reweighted least squares (IRLS) algorithm with a penalized iteratively reweighted least squares (PIRLS) algorithm. We do this when fitting GLMMs. If a more general specification of the logprior density is desired then other methods will be needed.
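To make that concrete, here is a minimal sketch of what the PIRLS step could look like for a logistic regression with a Gaussian N(0, Σ) prior on the coefficients, where Λ = Σ⁻¹ is the prior precision. This is only an illustration, not code from GLM.jl; all names are hypothetical.

```julia
using LinearAlgebra

# Hypothetical PIRLS sketch: logistic regression with a Gaussian prior
# N(0, Σ); Λ = inv(Σ) enters the normal equations as a quadratic penalty.
function pirls_logit(X::Matrix{Float64}, y::Vector{Float64}, Λ::Matrix{Float64};
                     maxiter::Int = 25, tol::Float64 = 1e-8)
    β = zeros(size(X, 2))
    for _ in 1:maxiter
        η = X * β                     # linear predictor
        μ = 1 ./ (1 .+ exp.(-η))      # inverse logit link
        w = μ .* (1 .- μ)             # IRLS working weights
        z = η .+ (y .- μ) ./ w        # working response
        # Penalized weighted least squares step: the only change from
        # plain IRLS is the addition of Λ from the Gaussian logprior.
        βnew = Symmetric(X' * (w .* X) + Λ) \ (X' * (w .* z))
        norm(βnew - β) < tol && return βnew
        β = βnew
    end
    β
end
```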
Technically, PIRLS applies to cases where the logprior density of the coefficients is quadratic. In practice a quadratic approximation to the logprior could be used for non-normal priors. At times, however, coding up something like that may not be worth the effort compared to handing a general nonlinear optimizer an expression for the logposterior density.
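For concreteness, such a quadratic approximation could be a second-order Taylor expansion of the logprior about the current iterate, with the negated Hessian playing the role of the penalty matrix Λ in the PIRLS step sketched above. A hypothetical helper, using ForwardDiff purely for illustration:

```julia
using ForwardDiff

# Second-order Taylor expansion of the logprior around β₀:
#   logprior(β) ≈ logprior(β₀) + g'(β - β₀) + (1/2)(β - β₀)'H(β - β₀)
# For a log-concave prior, -H is positive semidefinite and can serve as
# the penalty matrix in a PIRLS step. Hypothetical sketch, not package code.
function quadratic_logprior(logprior, β₀::Vector{Float64})
    g = ForwardDiff.gradient(logprior, β₀)
    H = ForwardDiff.hessian(logprior, β₀)
    g, -H                     # gradient and (negated-Hessian) penalty
end
```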
Sorry if my question is naive, as this is not my field of expertise: do you mean an algorithm for nonlinear optimization, such as the ones provided by the NLopt package, for example? How do the quadratic approximation and a nonlinear optimizer compare in terms of implementation (numerical accuracy, generality and stability of results, and speed of execution)?
Yes, I meant using a general nonlinear optimization algorithm such as those available in the NLopt package. Speed, generality, stability, etc. will depend on the dimension of the coefficient vector and on the covariance of the coefficients in the posterior distribution.
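As a rough illustration of that approach, here is a sketch that maximizes the logposterior of a logistic regression with a derivative-free NLopt algorithm (the model, prior, and function names are placeholders, not existing package code):

```julia
using NLopt, Distributions

# Hypothetical sketch: MAP estimate for logistic regression by handing
# the logposterior directly to a derivative-free NLopt algorithm.
function fit_map(X::Matrix{Float64}, y::Vector{Float64}, prior::UnivariateDistribution)
    p = size(X, 2)
    function logposterior(β::Vector, grad::Vector)
        # grad is unused: LN_NELDERMEAD is derivative-free
        η = X * β
        loglik = sum(y .* η .- log1p.(exp.(η)))   # Bernoulli loglikelihood
        loglik + sum(logpdf.(prior, β))           # plus the logprior
    end
    opt = Opt(:LN_NELDERMEAD, p)
    max_objective!(opt, logposterior)
    xtol_rel!(opt, 1e-8)
    optf, βhat, ret = optimize(opt, zeros(p))
    βhat
end
```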
Would you have the time to code Bayes GLM with general logpriors using NLopt's optimization routines? It would take me longer than you to write this code due to my lack of familiarity with NLopt.
It may also be a good idea to offer the option of specifying priors via the Distributions package, so that numerical optimization can be avoided when possible, along the lines of the sketch below. Would this be a realistic possibility?
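For instance, something along these lines (a hypothetical sketch with one univariate prior per coefficient):

```julia
using Distributions

# Hypothetical sketch: one univariate prior per coefficient, taken from
# Distributions; the logprior is the sum of the componentwise logpdfs.
priors = [Normal(0.0, 10.0), Laplace(0.0, 1.0), TDist(3.0)]
logprior(β) = sum(logpdf(d, b) for (d, b) in zip(priors, β))
```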
The coding of the prior is not the issue; the issue is whether the optimization of the logposterior density can be translated into an iteratively reweighted least squares calculation.
I see what you mean. Rather than using quadratic approximations to non-Normal logpriors in order to perform PIRLS calculations, I would prefer to pass an expression for the logprior to a nonlinear optimizer. Which approach would you prefer? If you agree with the latter, we could always add some benchmarks to see how well it works in practice.
This issue is somewhat out of date, since it has been superseded by the more recent developments towards the creation of the PGM package, which will provide an implementation of probabilistic graphical models. For this reason, I will close this ticket for now.