ShazAlvi opened this issue 3 years ago
Hi @ShazAlvi
It's certainly strange! Could you please run the mcmc case again with the `--debug` option (or `debug: True` in the yaml) and post a chunk of the output here? This will have a similar effect to what you did in `get_new_sample_metropolis`, but is a bit more detailed and should help us diagnose the problem.
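For reference, the command-line form looks like this (the input filename is just a placeholder):

```shell
# Re-run the same input with debug logging enabled; this is equivalent
# to setting `debug: True` at the top level of the input YAML.
cobaya-run my_input.yaml --debug
```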
Hi @JesusTorrado, thanks for your reply. The debug option is already set to `True`. Here is the chunk of the YAML that sets these options:
```yaml
sampler:
  mcmc:
    measure_speeds: false
    output_every: 10s
packages_path: /data2/cobaya_modules/
output: My_Test
force: true
debug: True
```
Hi @ShazAlvi
It doesn't look like the output you posted above for mcmc was generated with `debug: True`: with debugging enabled it prints far more information, in particular, for each iteration, which quantities are passed to each part of the likelihood, and what the individual priors and likelihoods evaluate to.
I am new to Cobaya and to developing likelihoods in this framework. I have written my likelihood and the param file. The params of my file look like this,

When I run the param file with `evaluate` as the sampler, I get the following, seemingly reasonable, output. However, when I run with the `mcmc` sampler, the sampler keeps running without accepting any model, and the number of tried models keeps increasing. This is how the output looks. I tried printing the likelihood evaluated in mcmc.py, in the function

def get_new_sample_metropolis(self):

and it shows that the likelihood is being evaluated to -inf. I guess that is why it can't accept any model.

My question is: how can `evaluate` give me a decent value of the likelihood while the `mcmc` sampler fails to get one, when both of them, I think, call the same function `logposterior` with a set of cosmological parameters? A clue as to how the calls made by `mcmc` and `evaluate` differ would help me track down the source of the problem.
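For context on why the chain stalls: a -inf log-posterior mechanically prevents any acceptance. A minimal sketch of the Metropolis acceptance rule in plain Python (not Cobaya's actual implementation) illustrates this:

```python
import math
import random

def metropolis_accept(logp_current, logp_proposed):
    """Standard Metropolis acceptance rule on log-posteriors."""
    if logp_proposed >= logp_current:
        return True
    # Acceptance probability is exp(logp_proposed - logp_current);
    # exp(-inf) == 0, so a -inf proposal can never be accepted.
    return random.random() < math.exp(logp_proposed - logp_current)

# A proposal whose log-posterior is -inf (e.g. the likelihood returned
# -inf, or the point fell outside the prior bounds) is rejected every
# time, so the sampler keeps proposing without ever accepting:
print(metropolis_accept(-10.0, -math.inf))  # False
print(metropolis_accept(-10.0, -9.0))       # True
```

So the question reduces to why the log-posterior is finite under `evaluate` but -inf at the points the mcmc proposal visits.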