brinckmann / montepython_public

Public repository for the Monte Python Code
MIT License

Gaussian prior not working #332

Closed DinorahBarbosa closed 1 year ago

DinorahBarbosa commented 1 year ago

Hi all,

I'm working with fgas data and some likelihood code I wrote. I need to impose a Gaussian prior on one of my nuisance parameters, and I've done that by writing the following in the fgas_old.data file

```
fgas_old.use_nuisance = ['K_fg0']
fgas_old.K_fg0_prior_center = 0.90
fgas_old.K_fg0_prior_variance = 0.10
```

On my ini file I have given the following bounds

```
data.parameters['K_fg0'] = [0.9, 0., 2., 0.004, 1, 'nuisance']
```

However, when I analyze the chain results my best fit (and mean) are out of bounds with respect to my Gaussian prior: K0 ~ 1.5. I have also run a similar analysis without the prior and the results are essentially the same, so apparently my Gaussian prior has no effect on my analysis. I'm not sure what I could be doing wrong. Is this the correct way to add a Gaussian prior to a parameter?

Thanks in advance,

Dinorah

brinckmann commented 1 year ago

Hi Dinorah,

Did you make changes to the init file? You'd need something like in the "absolute_M" likelihood:

```python
K_fg0 = (data.mcmc_parameters["K_fg0"]["current"] *
         data.mcmc_parameters["K_fg0"]["scale"])
loglkl = -0.5 * (K_fg0 - self.K_fg0_prior_center) ** 2 / (self.K_fg0_prior_variance ** 2)
```

If you already have loglkl defined you can add the loglkl from your prior to it.

Best, Thejs
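The Gaussian prior expression above can be checked in isolation with a minimal helper (a sketch; the function name is hypothetical and not part of MontePython; note that the `prior_variance` parameter is squared in the expression, so it effectively plays the role of a standard deviation):

```python
def gaussian_prior_loglkl(value, center, variance):
    """Gaussian prior contribution to ln(L): -0.5 * (x - mu)^2 / variance^2.

    Because 'variance' is squared here (following the expression in the
    absolute_M likelihood quoted above), it acts as a standard deviation:
    one 'variance' away from 'center' costs Delta chi^2 = 1.
    """
    return -0.5 * (value - center) ** 2 / variance ** 2
```

With Dinorah's numbers (center 0.90, "variance" 0.10), a sample at K_fg0 = 1.5 would be penalized by ln(L) = -0.5 * (0.6 / 0.1)^2 = -18, so a correctly applied prior should strongly disfavor such points.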

DinorahBarbosa commented 1 year ago

Hi Thejs,

I did add the line

```python
K_fg0 = (data.mcmc_parameters["K_fg0"]["current"] *
         data.mcmc_parameters["K_fg0"]["scale"])
```

in my init file, but I did not add anything like the loglkl expression you mentioned. My loglkl is defined like this:


```python
def loglkl(self, cosmo, data):

    chi2 = 0
    bg = cosmo.get_background()
    z = bg['z']
    rhob = bg['(.)rho_b']
    rhoc = bg['(.)rho_cdm']

    gamma_fg0 = (data.mcmc_parameters['gamma_fg0']['current'] *
                 data.mcmc_parameters['gamma_fg0']['scale'])

    K_fg0 = (data.mcmc_parameters['K_fg0']['current'] *
             data.mcmc_parameters['K_fg0']['scale'])

    for i in range(self.num_points):
        da = cosmo.angular_distance(self.z[i])
        rhob_theo = np.interp(1. / (1. + self.z[i]), 1. / (1. + z), rhob)
        rhoc_theo = np.interp(1. / (1. + self.z[i]), 1. / (1. + z), rhoc)
        theo = (K_fg0 * gamma_fg0 * rhob_theo / (rhob_theo + rhoc_theo) *
                (self.da_fid[i] / da) ** (3. / 2.))
        chi2 += ((theo - self.data[i]) / self.error[i]) ** 2

    # return ln(L)
    lkl = -0.5 * chi2

    return lkl
```

I see that absolute_M is itself a separate likelihood. Should I do the same for K0, or can I instead rewrite my loglkl above to include the K0 prior center and variance?

Best,

Dinorah

brinckmann commented 1 year ago

Hi Dinorah,

I think it's cleaner to add a separate likelihood, but it isn't strictly necessary: you could also add the Gaussian expression directly to `lkl`. A separate likelihood has two advantages, though. You can choose whether to include it on a given run (without adding extra flags to control that behavior), and you can later see which chi-squared contribution came from the prior and which from the other likelihood for a given parameter combination, using the run flag `--display-each-chi2` (possibly passing a bestfit file and `-f 0`).

Best, Thejs
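A standalone prior likelihood along the lines Thejs describes might look like the following sketch. The class name `K0_prior` is hypothetical; in a real MontePython install the class would inherit from `montepython.likelihood_class.Likelihood`, live in its own likelihood folder, and have `K_fg0_prior_center` / `K_fg0_prior_variance` set automatically from its `.data` file rather than passed to `__init__`:

```python
# Hypothetical sketch of a standalone Gaussian-prior likelihood,
# modeled loosely on MontePython's absolute_M likelihood.
# Real code would declare: class K0_prior(Likelihood)
class K0_prior:
    def __init__(self, prior_center, prior_variance):
        # In MontePython these attributes come from the likelihood's
        # .data file; they are set manually here only to keep the
        # sketch self-contained.
        self.K_fg0_prior_center = prior_center
        self.K_fg0_prior_variance = prior_variance

    def loglkl(self, cosmo, data):
        # Rescale the sampled value by its scale factor, as usual.
        K_fg0 = (data.mcmc_parameters['K_fg0']['current'] *
                 data.mcmc_parameters['K_fg0']['scale'])
        # Gaussian prior; prior_variance is squared, so it acts
        # as a standard deviation.
        return -0.5 * ((K_fg0 - self.K_fg0_prior_center) ** 2 /
                       self.K_fg0_prior_variance ** 2)
```

Because this likelihood never calls `cosmo`, it adds essentially no cost per step, and its chi-squared contribution shows up separately under `--display-each-chi2`.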

DinorahBarbosa commented 1 year ago

Hi Thejs,

It's working fine now, phew. I wrote it as a separate likelihood, by the way.

Thanks a lot!

Best, Dinorah