Status: Closed. ShazAlvi closed this issue 1 year ago.
Your runs are probably diverging due to poor starting guesses, a poor starting covmat, and/or inappropriate parameter boundaries. My guess is the last of these, so I'd start there and impose lower bounds on w0_fld and wa_fld. I don't remember off the top of my head what a good lower bound for wa_fld is, but setting at least w0_fld > -3.5 should be good.
Best, Thejs
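For concreteness, here is a sketch of how such bounds could be expressed in a MontePython `.param` file. Only the w0_fld > -3.5 bound comes from this thread; the means, step sizes, and the wa_fld bounds below are illustrative assumptions, not recommendations:

```python
# MontePython .param entries: [mean, min, max, 1-sigma step, scale, role]
# Only the -3.5 lower bound on w0_fld is from this thread; the rest is
# an illustrative guess and should be tuned for your own runs.
data.parameters['w0_fld'] = [-1.0, -3.5, None, 0.1, 1, 'cosmo']
data.parameters['wa_fld'] = [ 0.0, -3.0, 2.0,  0.2, 1, 'cosmo']
```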
I am having a consistent problem running dark energy models (w0LCDM, w0waLCDM) with the Planck-only likelihood. When the chains are run with the covariance-update option enabled (the default value of the --update flag), they stop updating midway: the run_job.log file in SLURM and the chain files are no longer written, even though the job keeps running. This does not happen when I disable updating by passing --update 0; the chains then run for the full duration of the job. However, when I restart the chains from the covmat of that previous run, they stop again, regardless of the update setting. So my question is: under what circumstances can the chains stop updating like this, with any likelihood?
My priors are the following:
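For reference, the kind of restart described above, with covariance updating disabled, might be launched like this. The paths, chain directory, and covmat filename are hypothetical; only the use of --update 0 and restarting from a previous covmat come from the question:

```shell
# Hypothetical MontePython restart: disable covmat updating (--update 0)
# and start from the covariance matrix produced by an earlier run.
python montepython/MontePython.py run \
    -p input/w0wa_planck.param \
    -o chains/w0wa_planck \
    -c chains/w0wa_planck/previous.covmat \
    --update 0 -N 100000
```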