brinckmann / montepython_public

Public repository for the Monte Python Code
MIT License

Best Convergence criterion? #67

Closed ClaudioNahmad closed 1 year ago

ClaudioNahmad commented 5 years ago

Hello, I have a question.

I have chains that converged correctly, i.e. R-1 < 0.05 or even R-1 << 0.05, but the acceptance rate is pretty low, e.g. acc. rate ~ 0.005. I ran "long" chains (around 950,000 steps with 8 processes), so I was hoping the chains would converge with a decent acceptance rate. The documentation suggests an acceptance rate between 0.2 and 0.3 for good exploration; is this correct?
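For context, the R-1 quoted here is the Gelman-Rubin convergence diagnostic. A minimal textbook sketch of it is below (MontePython's own implementation differs in its details, and the function name is illustrative):

```python
import random

def gelman_rubin_minus_one(chains):
    """Textbook Gelman-Rubin diagnostic for one parameter, returned
    as R-1. `chains` is a list of equally long lists of samples."""
    m = len(chains)                       # number of chains
    n = len(chains[0])                    # samples per chain
    means = [sum(c) / n for c in chains]
    grand_mean = sum(means) / m
    # Between-chain variance of the per-chain means (B/n)
    b_over_n = sum((mu - grand_mean) ** 2 for mu in means) / (m - 1)
    # W: average within-chain sample variance
    w = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m
    var_hat = (n - 1) / n * w + b_over_n  # pooled variance estimate
    return (var_hat / w) ** 0.5 - 1.0

# Four well-mixed "chains" drawn from the same distribution give R-1 ~ 0
rng = random.Random(0)
chains = [[rng.gauss(0.0, 1.0) for _ in range(5000)] for _ in range(4)]
print(gelman_rubin_minus_one(chains))
```

The key point for this thread: R-1 compares between-chain and within-chain variance, so it measures agreement between chains, not sampling efficiency; the acceptance rate is a separate diagnostic.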

My question is the following:

If I have a good convergence criterion but a "bad" acceptance rate, can the chain results be considered conclusive? How good is good when talking about acceptance rate?

NB. I only have this problem when running chains with Planck data (TT+lowP); whenever I run with BAO or JLA or both, the acceptance rates behave "well" (between 0.2 and 0.3 every time).

Thank you in advance!

brinckmann commented 5 years ago

Hi Claudio,

Can you tell me which flags you use to run MontePython? If you're using the newest version (3.1) it should update the covariance matrix and jumping factor automatically with the flags --update # (enabled and set to 50 by default) and --superupdate (disabled by default, recommended 20) respectively. There are of course exceptions where the code is not able to obtain a better acceptance rate, but these are rare and should not happen for common likelihoods or models. Can you also tell me which model you are running?
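The effect of updating the jumping factor can be illustrated with a toy 1-D chain: after each batch of steps, the proposal scale is nudged toward a target acceptance rate. This is only a sketch of the idea, not MontePython's actual --update/--superupdate algorithm, and all names and the update rule are illustrative:

```python
import math
import random

def adapt_jumping_factor(target_rate=0.26, n_rounds=30, steps_per_round=2000):
    """Toy illustration of jumping-factor tuning: after each batch of
    Metropolis steps on a 1-D standard normal target, grow the proposal
    scale if the acceptance rate is above target and shrink it if below.
    (Illustrative scheme, not MontePython's actual algorithm.)"""
    rng = random.Random(2)
    scale = 10.0  # deliberately oversized starting proposal width
    x = 0.0
    log_p = lambda v: -0.5 * v * v  # log-density up to a constant
    for _ in range(n_rounds):
        accepted = 0
        for _ in range(steps_per_round):
            prop = x + rng.gauss(0.0, scale)
            # Metropolis accept/reject in log space
            if math.log(rng.random()) < log_p(prop) - log_p(x):
                x = prop
                accepted += 1
        rate = accepted / steps_per_round
        scale *= math.exp(rate - target_rate)  # simple multiplicative nudge
    return scale, rate

scale, rate = adapt_jumping_factor()
print(scale, rate)  # rate ends near the 0.26 target
```

With a badly scaled starting proposal the raw acceptance rate is tiny, but the adaptation drives it back toward the target over a few batches, which is why a stuck rate of ~0.005 despite --superupdate is worth investigating.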

Best, Thejs

ClaudioNahmad commented 5 years ago

Hello Thejs,

Thank you for your response. I'm running MP version 3.1.0 with --superupdate 20; all the other flags are the standard ones (-o, --conf, -p and -N), and I'm running on a cluster using mpiexec.hydra.

The model is an extension of the CPL parameterization, a power series in which w(z) = [w0 + w1 z + w2 z^2] / (1+z)^2
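As a quick sanity check of the parameterization (the function name below is mine, purely illustrative):

```python
def w_of_z(z, w0, w1, w2):
    """Equation of state from the quoted power-series extension of CPL:
    w(z) = (w0 + w1*z + w2*z**2) / (1 + z)**2."""
    return (w0 + w1 * z + w2 * z ** 2) / (1.0 + z) ** 2

# Limits: w(0) = w0, and w(z) -> w2 as z -> infinity
print(w_of_z(0.0, -1.0, 0.5, 0.2))  # -1.0
```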

There are two types of runs I am doing: a) bao_boss_dr12 + jla likelihoods, b) bao_boss_dr12 + jla + planck highl + planck lowl.

The runs in which I use the Planck likelihoods are the ones with the incredibly low acceptance rate (~0.005), even in cases where I just run the Lambda-CDM model.

Thank you for your help.

brinckmann commented 5 years ago

Hi Claudio,

Would you mind packing your chains directory as a zip or tar file and sending it to the email listed on my GitHub page? I can have a look and let you know if I have any ideas. As for sampling fidelity, a good rule of thumb is that a lower-than-recommended acceptance rate is OK (just inefficient), but a higher one is bad and can bias your results. However, there may be cases where the parameter space is not properly explored by Metropolis-Hastings (e.g. multimodal or complicated non-Gaussian posterior distributions) and a MultiNest or PolyChord sampling run is more appropriate.
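The dependence of the acceptance rate on the proposal scale can be seen in a toy Metropolis-Hastings run on a 1-D Gaussian (a minimal sketch, not MontePython code; the function name is illustrative):

```python
import math
import random

def metropolis_acceptance_rate(proposal_width, n_steps=20000, seed=1):
    """Run a toy Metropolis-Hastings chain on a 1-D standard normal
    target and return the fraction of accepted proposals."""
    rng = random.Random(seed)
    x = 0.0
    log_p = lambda v: -0.5 * v * v  # log-density up to a constant
    accepted = 0
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, proposal_width)
        # Accept with probability min(1, p(proposal)/p(x))
        if math.log(rng.random()) < log_p(proposal) - log_p(x):
            x = proposal
            accepted += 1
    return accepted / n_steps

# Tiny jumps accept almost everything but barely move the chain;
# oversized jumps are rejected most of the time.
print(metropolis_acceptance_rate(0.1))   # high acceptance rate
print(metropolis_acceptance_rate(50.0))  # low acceptance rate
```

This illustrates the rule of thumb above: a too-small acceptance rate comes from an oversized proposal and just wastes steps, whereas a near-unity acceptance rate means the chain is taking tiny steps and may look converged without actually exploring the posterior.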

Best, Thejs

ClaudioNahmad commented 5 years ago

Of course.

I've just sent them to you.

Claudio