joshspeagle / dynesty

Dynamic Nested Sampling package for computing Bayesian posteriors and evidences
https://dynesty.readthedocs.io/
MIT License

Complicated issue for usage of arrays #150

Closed · esadri21 closed this issue 5 years ago

esadri21 commented 5 years ago

Dear J. Speagle. To test dynesty, I just copied and pasted my Lambda CDM code from emcee into dynesty. I used the same loglike as I used in emcee.

I have copied and pasted the relevant part here:

import numpy as np
from math import log10            # note: math's scalar log10, not np.log10
from scipy.integrate import quad

def ant(z, O_D):                  # 1/E(z) for flat LCDM
    return 1/((1 - O_D) * (1 + z)**3 + O_D)**(0.5)

################################## SN DATA
## Supernova data: zsn, mu_b, and the covariance C are read from the data file.
def dL(n, H0, O_D, M):
    q_sn = quad(ant, 0, zsn[n], args=(O_D,))[0]             # comoving integral
    h = 5 * log10((299000/H0) * (1 + zsn[n]) * q_sn) + 25   # distance modulus
    fn = (mu_b[n] - M - h)                                  # residual for SN n
    return fn

def prior_transform(H0):
    return 30. * H0 + 50.         # maps the unit cube [0, 1] to H0 in [50, 80]

def SN(H0, O_D, M):               # this is the chi^2, used as the loglike
    f_list = [dL(i, H0, O_D, M) for i in range(40)]   # we have 40 data points
    rdag = [f_list]               # row vector of residuals (1 x 40)
    rmat = [[f] for f in f_list]  # column vector of residuals (40 x 1)
    m1 = np.dot(rdag, C)          # r^T C
    m2 = np.dot(m1, rmat)         # r^T C r, a 1 x 1 matrix
    x2_SN = np.linalg.det(m2)     # det of a 1 x 1 matrix = its single entry
    return x2_SN

The total loglike for several data sets would be something like SN() + BAO() + ..., but here we just have SN (see the sketch after the loglike definition below).

def loglike(H0):
    O_D = 0.69                    # held fixed
    M = -19.38                    # held fixed
    return SN(H0, O_D, M)
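
For example, if a second probe were added, the combination would just be a sum inside the log-likelihood. BAO() below is only a hypothetical placeholder, not part of my code:

def BAO(H0, O_D):
    # hypothetical placeholder for a second dataset's chi^2 term
    return 0.0

def loglike_total(H0):
    O_D = 0.69
    M = -19.38
    return SN(H0, O_D, M) + BAO(H0, O_D)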

Now, this code works perfectly in emcee and MultiNest, but when I use it in dynesty I get many errors.

sampler = NestedSampler(loglike, prior_transform, 2)

I vectorized the SN() function and used .any() inside loglike; it then ran to completion, but gave wrong values. I am confused about what I should do.

joshspeagle commented 5 years ago

There's not really enough information here for me to see what's going on, but given the setup my first question is: you have ndim=2 here, yet most of your functions seem to be functions of just one parameter (H0). Is that intended? Right now you're sampling 2 numbers, transforming them from [0, 1] -> [50, 80], feeding both of these into loglike -> SN -> dL, computing 2 values of h, and likewise returning 2 values for fn.
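
For reference, here is a rough sketch of what a 1-parameter setup would look like if only H0 is free (reusing your SN function; note that prior_transform receives and returns an array of length ndim):

def prior_transform(u):
    # u is a length-1 array of unit-cube samples; map it to H0 in [50, 80]
    return 30. * u + 50.

def loglike(theta):
    H0 = theta[0]                 # unpack the single free parameter
    return SN(H0, 0.69, -19.38)

sampler = NestedSampler(loglike, prior_transform, ndim=1)
sampler.run_nested()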

esadri21 commented 5 years ago

Found the answer. Oh god. It was about log10 and np.log10.
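
For anyone who hits the same thing, the difference is easy to reproduce:

import math
import numpy as np

x = np.array([10., 100., 1000.])
print(np.log10(x))                # elementwise: [1. 2. 3.]
try:
    math.log10(x)                 # math.log10 only accepts scalars
except TypeError as e:
    print(e)                      # e.g. "only size-1 arrays can be converted to Python scalars"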

Could you tell me why sampler = NestedSampler(loglike, prior_transform, 3) is so much faster than dsampler = dynesty.DynamicNestedSampler(loglike, prior_transform, ndim=3)?

Which one is more accurate? Of course, I think I should use multiprocessing, because for 5-dimensional models it takes a very long time. I appreciate your attention.

joshspeagle commented 5 years ago

The normal nested sampler in general is faster. You’re more than welcome to try to use multiprocessing; dynesty supports such efforts through a user-provided pool.
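
Here is a rough sketch of that pattern (the pool size below is just an example, and loglike / prior_transform must be defined at module level so they can be pickled):

import multiprocessing
import dynesty

if __name__ == '__main__':
    with multiprocessing.Pool(4) as pool:
        sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=3,
                                        pool=pool, queue_size=4)
        sampler.run_nested()
    results = sampler.results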