Closed: gabrielastro closed this issue 9 months ago
I think that all the species-related points may have been addressed in the meantime.
For setting some of the MultiNest/UltraNest options, you can use the `kwargs_multinest` and `kwargs_ultranest` parameters in `FitModel`. Have a look at the documentation pages of those packages for more details.
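As a sketch of how those two parameters could be used (the tag and option values below are hypothetical; the option names come from PyMultiNest's `run()` and UltraNest's `ReactiveNestedSampler.run()` signatures, not from species itself):

```python
# Options forwarded to pymultinest.run() (names per the PyMultiNest API):
kwargs_multinest = {
    "n_live_points": 500,
    "evidence_tolerance": 0.5,
    "sampling_efficiency": 0.8,
}

# Options forwarded to ReactiveNestedSampler.run() (names per the UltraNest API):
kwargs_ultranest = {
    "min_num_live_points": 400,
    "frac_remain": 0.01,
}

# Hypothetical usage, assuming a FitModel instance named `fit`:
# fit.run_multinest(tag="model_fit", kwargs_multinest=kwargs_multinest)
# fit.run_ultranest(tag="model_fit", kwargs_ultranest=kwargs_ultranest)
```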
To inspect the intermediate sampling results, you can have a look at the output folders written by MultiNest/UltraNest.
In general, I would suggest not adding all data at once in the case of a high-S/N spectrum with a broad wavelength range. Try to increase the complexity of the fit stepwise. Typically, a fit shouldn't take longer than several minutes.
Point 8: if you leave a parameter out of `bounds`, its range will automatically be set from the grid in case the parameter is mandatory. So in this case I would simply not include `'feh'`. If you do want to check whether a parameter is needed, you can use `get_parameters()` or `get_points()` of `ReadModel`.
Thanks for your helpful tips!
For the documentation of `normal_prior` of `FitModel`:

> "The parameter is not used if the argument is set to `None`."

→

> "A linear flat prior between the natural bounds is used for the parameters for which a prior was not set explicitly through `bounds`."

or something like this would be more accurate.
Indeed, Ctrl+C will not work for PyMultiNest, as Johannes Buchner explained. Avoiding mistakes is better :wink:.
Thanks!
Indeed, the `kwargs_*` parameters you added answer this!
Ok, starting with small complexity and increasing sounds good.
I do not get the warning anymore, so, thank you.
If I am not mistaken, the priors (including parameter ranges) used for the fits are not stored in the database. Might storing them be a good idea, since they do make a difference? While at it, you could maybe also store the number of likelihood evaluations or the CPU time as a reminder of how expensive a good run was :).
Thanks for pointing to those functions! Sorry for not being clear, but I meant that I had accidentally set a parameter that is not one of the grid parameters. This could either be ignored or, better, raise a single error and stop, instead of printing the error message over and over. But here again, the best solution is to not accidentally pass unneeded parameters :wink:.
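The "fail once, early" behavior suggested above could look something like this (a hypothetical helper, not part of species; in practice the grid parameter list would come from e.g. `ReadModel.get_parameters()`):

```python
def check_fit_parameters(requested, grid_parameters):
    """Raise one clear error if a requested parameter is not in the model grid.

    Hypothetical validation helper: checking the parameters once, before
    sampling starts, avoids a repeated KeyError inside the likelihood
    function at every evaluation.
    """
    unknown = sorted(set(requested) - set(grid_parameters))
    if unknown:
        raise ValueError(
            f"Parameter(s) {unknown} are not part of the model grid "
            f"(available: {sorted(grid_parameters)})."
        )

# Example: 'feh' is not in this (made-up) grid, so this would fail
# immediately instead of at every likelihood call:
# check_fit_parameters(["teff", "logg", "feh"], ["teff", "logg", "radius"])
```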
Consider it done! The priors are stored in the database as the `bounds` and `normal_prior` groups of the sampling results.
Excellent! Thank you. I will not wait for the next run to confirm it; thank you already :wink:.
In the documentation of `fit.run_ultranest`, about `prior`: […] or something like this to make it clearer.
Is it possible to terminate a run early without losing all the data? It seems that Ctrl+C terminates the run but also corrupts the database, because it remains open and the user cannot close it by hand. For MultiNest, Ctrl+C (most likely?) does not stop the run (as mentioned in #76). It would be nice to have, for both UltraNest and MultiNest, the possibility of stopping gracefully without corrupting the database (and, as a bonus, maybe even saving the points obtained so far, but this is really extra). I made a mistake starting a run with too many live points and could not correct this, only wait for it to finish…
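The generic Python pattern for turning Ctrl+C into a graceful stop is a custom signal handler; this is only a sketch of that pattern, not an existing species feature, and while MultiNest's compiled code is running the handler would only fire between likelihood callbacks:

```python
import signal

class GracefulStop:
    """Turn Ctrl+C into a flag instead of an immediate KeyboardInterrupt.

    A sampling loop (or likelihood callback) can poll `stop_requested`,
    finish its current step, and then close the HDF5 database cleanly.
    Hypothetical pattern for illustration only.
    """

    def __init__(self):
        self.stop_requested = False
        signal.signal(signal.SIGINT, self._handler)

    def _handler(self, signum, frame):
        self.stop_requested = True

# Hypothetical usage inside a sampling loop:
# stopper = GracefulStop()
# if stopper.stop_requested:
#     database.close()  # hypothetical cleanup
#     break
```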
If 2. is possible, while the run is going, is there an easy way of plotting (in physical-parameter space) the distribution of points found so far, if this is meaningful for estimating whether the run is actually finished or still needs a long time? Especially when testing, this would be practical. (As you probably know, "UltraNest has a visualisation to observe the current live points", but I am guessing this is deep in the package. For MultiNest, I cannot make sense of how close it is to being finished, and just looking by hand at the distribution of points while the sampling is running should give a good idea.)
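For MultiNest, the output folder contains a `[root]phys_live.points` file with the current live points; per the MultiNest README, each row holds the physical parameter values followed by the log-likelihood and a mode label (treat that column layout as an assumption and check your own files). A minimal, self-contained parsing sketch, demonstrated on a fake two-parameter file with made-up values:

```python
from pathlib import Path

def read_live_points(path):
    """Parse a MultiNest [root]phys_live.points file into plain lists.

    Assumes each row is: parameter columns, then log-likelihood,
    then a mode label (per the MultiNest README).
    """
    params, loglike = [], []
    for line in Path(path).read_text().split("\n"):
        cols = line.split()
        if not cols:
            continue
        values = [float(c) for c in cols]
        params.append(values[:-2])   # parameter columns
        loglike.append(values[-2])   # log-likelihood column
    return params, loglike

# Fake file for illustration (two live points, two parameters):
Path("phys_live.points").write_text(
    "0.15E+04 0.40E+01 -0.12E+03 1\n"
    "0.16E+04 0.42E+01 -0.11E+03 1\n"
)
params, loglike = read_live_points("phys_live.points")
# A quick look at the spread of the live points could then be made
# with e.g. matplotlib: plt.scatter(*zip(*params))
```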
I got the following warning: […]

How can I "use a stepsampler" or "set `frac_remain`"? Or are these just symptoms, and does it mean that for typical fitting situations I am doing something wrong at a deeper level?
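Both knobs are UltraNest features (see the UltraNest documentation); whether species exposes the step sampler is an assumption here, so the step-sampler part is shown at the UltraNest level only:

```python
# 1) Loosen the termination criterion: stop when at most 50% of the
#    evidence is estimated to remain in the live points (UltraNest's
#    default frac_remain is 0.01). Hypothetical usage via species:
kwargs_ultranest = {"frac_remain": 0.5}
# fit.run_ultranest(tag="model_fit", kwargs_ultranest=kwargs_ultranest)

# 2) A step sampler replaces rejection sampling when the constrained
#    prior volume becomes hard to sample (what the warning hints at).
#    At the UltraNest level, with a ReactiveNestedSampler `sampler`:
# import ultranest.stepsampler
# sampler.stepsampler = ultranest.stepsampler.SliceSampler(
#     nsteps=2 * ndim,  # ndim: number of fitted parameters
#     generate_direction=ultranest.stepsampler.generate_mixture_random_direction,
# )
```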
Hoping for a quick, rough run, I tried `min_num_live_points=100` in `fit.run_ultranest()`, but after "the main part" came the message: […] It wrote something about increasing the number of live points and started running again… So in the end it took what felt like a similar amount of time as with 500 live points. Is there then (typically/often) no advantage to reducing the number of live points? And more generally, is it somehow possible to do a "quick run" yielding only an approximate best fit and error bars, to see whether the model can match at all?
I get the following warnings (pointing them out here in case they are easy to fix):
```
[…]/species/fit/fit_model.py:1887: InstrumentationWarning: instrumentor did not find the target function -- not typechecking species.fit.fit_model.FitModel.run_multinest.<locals>.lnlike_multinest
```
```
KeyError: 'feh'
Exception ignored on calling ctypes callback function: <function run.<locals>.loglike at 0x7fa2440c0f70>
Traceback (most recent call last):
  File "[…]/.local/lib/python3.9/site-packages/pymultinest/run.py", line 228, in loglike
    return LogLikelihood(cube, ndim, nparams)
  File "[…]/species/fit/fit_model.py", line 1906, in lnlike_multinest
    mpi_rank = MPI.COMM_WORLD.Get_rank()
  File "[…]/species/fit/fit_model.py", line 1161, in lnlike_func
```
```
~$ head multinest/.txt
0.200000000000000048E-02 -0.139057853353940903-308 0.101724320650100708E+04 0.413902401924133301E+01 0.246439111232757568E+01 0.199086349010467556E+01 0.156033434761241327E+02
```