zonca opened this issue 5 months ago
There is a way to restrict the number of samples (by setting the `max_iter` kwarg in `pymultinest.run`, for example), but I'd have to experiment with how few samples we can get away with while still maintaining reasonably resolved posteriors. If we want to implement some sort of test of the results as suggested in #21, for example, we'll need to have decent posteriors.
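A minimal sketch of what limiting the run might look like. The `max_iter` keyword of `pymultinest.run` is real (0 means run to convergence); the toy prior, likelihood, dimensionality, and output path below are illustrative stand-ins, not the project's actual model:

```python
# Sketch: cap the number of MultiNest iterations via max_iter.
# The 3-parameter Gaussian toy model here is purely illustrative.
def prior(cube, ndim, nparams):
    # map the unit hypercube onto a flat prior over [-10, 10]
    for i in range(ndim):
        cube[i] = 20.0 * cube[i] - 10.0

def loglike(cube, ndim, nparams):
    # toy Gaussian log-likelihood centered at the origin
    return -0.5 * sum(cube[i] ** 2 for i in range(ndim))

if __name__ == "__main__":
    try:
        import pymultinest
        pymultinest.run(loglike, prior, 3,
                        outputfiles_basename="chains/test_",  # hypothetical path
                        max_iter=10000,  # stop early; 0 = run to convergence
                        resume=False, verbose=True)
    except ImportError:
        print("pymultinest not installed; sketch only")
```

Tuning `max_iter` down trades posterior resolution for wall-clock time, which is the experiment described above.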
It is in principle possible to trim the test dataset so that the number of model parameters is smaller. The run time of the code is dominated by the Cholesky decomposition, which scales as the cube of the number of model parameters. We've talked about this in the past, but at the same time, this dataset and model size are already greatly reduced from what an analysis of real (not simulated) data from an interferometer will require.
I think you could set `max_iter` so the test runs for about 30 minutes, then put a very loose test on the posterior: we are just checking that the software is not wrong by orders of magnitude. It is a test of the software, not a scientific-level test you would do for a paper.
Agreed, even a test that runs through the whole pipeline with extremely minimal parameters like `maxiter=2` would be helpful as an initial check, just to verify that everything is installed correctly, that it can find the GPU, etc.
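The "installed correctly, can find the GPU" part can be checked before any long run. A sketch of a pre-flight check using `nvidia-smi` (how the actual pipeline detects its GPU may differ; this is just one portable option):

```python
import shutil
import subprocess

def gpu_available():
    """Return True if nvidia-smi is present and lists at least one GPU."""
    exe = shutil.which("nvidia-smi")
    if exe is None:
        return False
    try:
        out = subprocess.run([exe, "-L"], capture_output=True,
                             text=True, timeout=10)
        return out.returncode == 0 and "GPU" in out.stdout
    except OSError:
        return False

print("GPU visible:", gpu_available())
```

Running this first gives a fast, unambiguous failure instead of discovering a missing GPU hours into the sampling.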
Running the example on an A100 took almost 4 hours.
Do you think it would be possible to add a limit on the sampling so that it runs for just 30 minutes or so? Then you can explain how to change that parameter to run to completion.