Closed carsten-j closed 4 days ago
I am looking for the best way to return not just a posterior sample distribution but also the mean vector and covariance matrix of the Gaussian distribution. Any suggestions? So far my only idea is to add another section to the returned InferenceData containing this information. Thoughts on this?
I'm not sure the InferenceData is the best place to put it. We should copy whatever we do with Variational Inference
Historically, InferenceData has been focused on MCMC. But we have discussed a few times extending it to better handle other inference methods, like SMC or variational methods. It's just that there has not been enough momentum to agree on and implement a schema that works for those methods.
@zaxtax and @aloctavodia, are you saying that I should not return InferenceData at all, or just not return the Gaussian mean and covariance in the InferenceData object? I am new to both PyMC and Bayesian statistics, so I do not know the history of this package. Best, Carsten
Oh, it's more that we haven't decided how to handle this within the library. Don't treat this as a blocker, though we should raise it for discussion more broadly
CC @ferrine
exactly, just saying that if necessary InferenceData can be extended.
Suggestion: include two groups in the returned InferenceData. A `fit` group that includes the mean and covariance of the Laplace fit, and a `posterior` group that includes draws from this fit, with all the bells and whistles like dimensions, deterministics, etc. Include the default extra groups like observed and constant data. This will look just like a fit from MCMC sampling, and the draws can be disabled by the user by setting `draws=0`.
We could even (optionally) try different fits from distinct initialization points and save those as distinct "chains" in the `fit` and corresponding `posterior` groups. Although multiple initializations are usually used with the goal of finding the best fit, they could still be useful to detect multi-modality / pervasiveness of local optima.
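To make the two pieces concrete, here is a minimal sketch of producing both the Gaussian parameters (what would go in a `fit` group) and draws from them (what would go in a `posterior` group). The conjugate toy model and the finite-difference Hessian are illustrative assumptions, not the PR's actual implementation:

```python
import numpy as np
from scipy import optimize

# Toy model (assumed for illustration): y ~ Normal(mu, 1), mu ~ Normal(0, 10)
y = np.array([1.2, 0.8, 1.5, 0.9])

def neg_log_post(mu):
    log_lik = -0.5 * np.sum((y - mu) ** 2)
    log_prior = -0.5 * mu**2 / 10**2
    return -(log_lik + log_prior)

# Mean of the Laplace approximation = MAP estimate
res = optimize.minimize_scalar(neg_log_post)
map_mu = res.x

# Covariance = inverse Hessian of the negative log posterior at the MAP
# (central finite difference here; PyMC would use autodiff)
eps = 1e-4
hess = (
    neg_log_post(map_mu + eps) - 2 * neg_log_post(map_mu) + neg_log_post(map_mu - eps)
) / eps**2
var = 1.0 / hess

# (map_mu, var) would populate the "fit" group; these draws the "posterior" group
draws = np.random.default_rng(0).normal(map_mu, np.sqrt(var), size=1000)
```

For this conjugate example the exact posterior is Gaussian, so the Laplace fit recovers it: mean 4.4/4.01 and variance 1/4.01.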
@carsten-j PR looks great! I left some comments above
Thank you @ricardoV94 and @twiecki for the review comments. I believe that all of them except one have been fixed. I have not figured out how to use `remove_value_transforms`. I tried to browse through the PyMC source code, but that did not really help.
The docs contain a code example: https://www.pymc.io/projects/docs/en/stable/api/model/generated/pymc.model.transform.conditioning.remove_value_transforms.html
I should have mentioned that I did read the doc and looked at the example. But I have not been able to figure out how to apply it to my case. I will try again ...
To be able to use it inside the model context, this change will need to get merged first: https://github.com/pymc-devs/pymc/pull/7352
But you should already be able to test it the object-oriented way, with `pm.fit(..., model=model)` outside of the model context.
@ricardoV94 I figured out how to replace the for loop with remove_value_transforms. Is the PR ready for merge or are there additional review comments?
@ricardoV94 it does work, but I have updated the code according to your comment. It makes sense, and I am still learning about PyMC :-)
@ricardoV94 tests are failing, but I do not see any relationship between the failing tests and this new Laplace feature. Is there a general problem with the tests?
@ricardoV94 tests are still failing. I am not sure what to do here, as I do not see why these tests should fail due to my PR. Please advise.
@carsten-j I can confirm those failures are unrelated to your changes, and related to changes in PyMC or other dependencies. We'll have to fix them in a separate PR
Thanks for clarifying @ricardoV94. What is the next step from here? Do we need approval from additional maintainers, or can the PR be merged now?
@carsten-j we would fix the tests elsewhere, then update this PR on top of the fixes, and if the tests are passing we can merge, unless any of the reviewers blocks it / requests more changes. You don't need to do anything :)
Looks good. Once the tests pass, I think it's good to merge
@carsten-j tests are no longer failing in main. You can rebase/merge into your branch
@ricardoV94 I have rebased the laplace branch, but it looks like someone needs to approve the GitHub workflows.
Looks like there are still a few failing tests, but once those pass this is probably good to merge
@zaxtax failing test has been fixed. Can you approve the waiting workflow?
@zaxtax, all tests passed. Are you also able to merge the PR? Thanks.
Congrats @carsten-j, this is a big one!
Thank you @twiecki. Really happy to contribute, and thanks to all those who helped. After the summer I will try to work on documentation for building and running locally. It took me some time to figure out how this works!
Congrats @carsten-j this is really neat!
Brilliant work @carsten-j . Hope to see you contribute to PyMC again!
This is an early version of a quadratic approximation implementation that I developed while reading Statistical Rethinking by Richard McElreath.
There is a short discussion about this in the issue, and maybe @theorashid can help with feedback on this draft PR.
This work is partly based on the Python package pymc3-quap, but pymc3-quap is built on PyMC3, and a lot has happened between versions 3 and 5 of PyMC. Optimizers work better when provided with a good initial guess, and hence an (optional) starting point has been added to the function arguments. Please see GitHub for a discussion about the differences between PyMC versions 3 and 5 for computing the Hessian.
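The effect of the starting point can be illustrated with a plain scipy optimizer; the Rosenbrock function below is an arbitrary stand-in for a model's negative log posterior, not anything from this PR:

```python
import numpy as np
from scipy import optimize

# Rosenbrock function: a classic objective with a narrow curved valley
# where a poor starting point costs many extra evaluations
def rosen(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

# Starting far from the optimum (1, 1) versus starting near it
far = optimize.minimize(rosen, x0=np.array([-3.0, 4.0]), method="BFGS")
near = optimize.minimize(rosen, x0=np.array([0.9, 0.8]), method="BFGS")
print(far.nfev, near.nfev)  # the nearby start typically needs far fewer evaluations
```

The same reasoning motivates exposing an optional starting point in the quadratic approximation's arguments.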