Ciela-Institute / caustics

A gravitational lensing simulator for the machine learning era.
https://caustics.readthedocs.io
MIT License

Enhancing Statistical Inference Examples and Sampler Analysis #250

Open andigu opened 2 weeks ago

andigu commented 2 weeks ago

The integration of the package with PyTorch and its surrounding ecosystem is impressive and makes it highly accessible to the community. However, the paper could benefit from more examples that leverage this compatibility, particularly for statistical inference with Pyro. For instance, a more detailed analysis of the NUTS sampler's convergence would be valuable: the current example uses just 100 samples, and it is not clear whether the sampling has fully converged. Given the speed of sampling, drawing more samples and evaluating convergence with standard metrics like Rhat (https://mc-stan.org/rstan/reference/Rhat.html) would strengthen the results.
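For concreteness, the convergence check could look something like this minimal NumPy sketch of the split-Rhat statistic. This is purely illustrative and independent of caustics or Pyro (production code would use ArviZ's `az.rhat` or the diagnostics Pyro reports); the `split_rhat` name and the `(n_chains, n_draws)` layout are my assumptions:

```python
import numpy as np

def split_rhat(chains):
    """Split-Rhat convergence diagnostic (Gelman et al.).

    chains: array of shape (n_chains, n_draws).
    Values close to 1.0 indicate the chains agree with each other;
    values noticeably above 1.0 suggest more sampling is needed.
    """
    n_chains, n_draws = chains.shape
    half = n_draws // 2
    # Split each chain in half so within-chain trends also inflate Rhat.
    split = chains[:, : 2 * half].reshape(2 * n_chains, half)
    m, n = split.shape
    chain_means = split.mean(axis=1)
    chain_vars = split.var(axis=1, ddof=1)
    between = n * chain_means.var(ddof=1)   # between-chain variance B
    within = chain_vars.mean()              # within-chain variance W
    var_hat = (n - 1) / n * within + between / n
    return np.sqrt(var_hat / within)
```

Running this on, say, four chains of posterior draws would make the "has it converged?" question quantitative rather than visual.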

Additionally, experimenting with different samplers, such as HMC, could add depth to the analysis. Providing recommendations on which samplers tend to perform well with the package, or if different samplers need to be tried on a case-by-case basis, would greatly benefit users. This could enhance the practical utility of the package and help guide its adoption in a wider range of applications.
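As a point of reference for what the HMC comparison involves, here is a self-contained toy sampler on a 1-D standard normal target. This is not the Pyro `HMC` kernel the package would actually use; the function name, defaults, and target are all illustrative assumptions meant only to show the leapfrog-plus-Metropolis structure whose tuning (step size, path length) NUTS automates:

```python
import numpy as np

def hmc(logp, grad_logp, x0, n_samples, step_size=0.2, n_leapfrog=20, seed=0):
    """Toy 1-D Hamiltonian Monte Carlo sampler (illustrative only)."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_samples)
    n_accept = 0
    for i in range(n_samples):
        p0 = rng.normal()              # fresh Gaussian momentum each iteration
        x_new, p = x, p0
        # Leapfrog integration of the Hamiltonian dynamics.
        p += 0.5 * step_size * grad_logp(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += step_size * p
            p += step_size * grad_logp(x_new)
        x_new += step_size * p
        p += 0.5 * step_size * grad_logp(x_new)
        # Metropolis correction on the total energy H = -log p(x) + p^2 / 2.
        dH = (-logp(x_new) + 0.5 * p ** 2) - (-logp(x) + 0.5 * p0 ** 2)
        if np.log(rng.uniform()) < -dH:
            x, n_accept = x_new, n_accept + 1
        samples[i] = x
    return samples, n_accept / n_samples
```

The sensitivity of plain HMC to `step_size` and `n_leapfrog` is exactly what makes sampler recommendations valuable: NUTS adapts these automatically, while HMC may need per-problem tuning.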

ConnorStoneAstro commented 2 weeks ago

Hi @andigu, that's a good idea; it would be worth getting into the differences between the samplers caustics can use. To me, this seems most appropriate for the NUTS tutorial example on the website rather than the JOSS paper. Using Rhat and the autocorrelation length, we can show that NUTS draws essentially independent samples, so 100 is plenty for demo purposes. Comparing with emcee and HMC will make this clear.
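For reference, the autocorrelation-length check could look roughly like this NumPy sketch of the integrated autocorrelation time (illustrative only; emcee and ArviZ ship production estimators of this quantity). An IAT near 1 means the draws are essentially independent, so the effective sample size is about `len(x) / IAT`:

```python
import numpy as np

def integrated_autocorr_time(x):
    """Estimate the integrated autocorrelation time of a 1-D chain.

    Truncates the autocorrelation sum at the first non-positive lag
    (a simple initial-positive-sequence rule). IAT ~ 1 means nearly
    independent draws; ESS is roughly len(x) / IAT.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    # Full autocovariance via FFT (zero-padded to avoid circular wrap),
    # then normalised to an autocorrelation function.
    f = np.fft.rfft(xc, n=2 * n)
    acov = np.fft.irfft(f * np.conj(f))[:n] / n
    rho = acov / acov[0]
    tau = 1.0
    for k in range(1, n):
        if rho[k] <= 0:       # truncate at the first non-positive lag
            break
        tau += 2.0 * rho[k]
    return tau
```

Reporting this alongside Rhat in the tutorial would back up the claim that 100 NUTS samples are effectively 100 independent draws.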

I'll comment here when we have some progress on this.