brinckmann / montepython_public

Public repository for the Monte Python Code
MIT License

Root finding generating trouble #360

Open fasensior opened 5 months ago

fasensior commented 5 months ago

I have run into some problems while executing `remove_bao` to smooth the plot; my code is basically the `remove_bao` function plus the input file and some plots. The problem is similar to the one in #195: `error: (m>k) failed for hidden m: fpcurf0:m=1`

The data I have been using is CLASS generated data for the power spectrum.

The main problem seems to be in detecting the roots of the BAO feature. Investigating a little, I found that the `roots` method of `UnivariateSpline` sometimes misses roots (see the example in https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.roots.html#scipy.interpolate.UnivariateSpline.roots). With the changes suggested there, I have been able to reproduce figures from some papers:

```python
import numpy as np
import scipy.interpolate


def remove_bao(k_in, pk_in):
    """De-wiggling routine by Mario Ballardini."""

    # This k range has to contain the BAO features:
    k_ref = [8e-3, 4e-1]

    # Spline all (log-log) points outside k_ref range:
    idxs = np.where(np.logical_or(k_in <= k_ref[0], k_in >= k_ref[1]))
    _pk_smooth = scipy.interpolate.UnivariateSpline(np.log(k_in[idxs]),
                                                    np.log(pk_in[idxs]), k=3, s=0)
    pk_smooth = lambda x: np.exp(_pk_smooth(np.log(x)))

    # Find second derivative of each spline:
    fwiggle = scipy.interpolate.UnivariateSpline(k_in, pk_in / pk_smooth(k_in), k=3, s=0)
    derivs = np.array([fwiggle.derivatives(_k) for _k in k_in]).T
    d2 = scipy.interpolate.splrep(k_in, derivs[2], k=3, s=1.0)

    # Find maxima and minima of the gradient (zeros of 2nd deriv.), then put a
    # low-order spline through zeros to subtract smooth trend from wiggles fn.
    wzeros = scipy.interpolate.PPoly.from_spline(d2).roots(extrapolate=True)
    wzeros = wzeros[np.where(np.logical_and(wzeros >= k_ref[0], wzeros <= k_ref[1]))]
    wzeros = np.concatenate((wzeros, [k_ref[1]]))
    wtrend = scipy.interpolate.UnivariateSpline(wzeros, fwiggle(wzeros), k=3, s=0)

    # Construct smooth no-BAO:
    idxs = np.where(np.logical_and(k_in > k_ref[0], k_in < k_ref[1]))
    pk_nobao = pk_smooth(k_in)
    pk_nobao[idxs] *= wtrend(k_in[idxs])

    # Construct interpolating function:
    ipk = scipy.interpolate.interp1d(k_in, pk_nobao, kind='linear',
                                     bounds_error=False, fill_value=0.)

    pk_nobao = ipk(k_in)
    return pk_nobao
```
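The key change is finding zeros via `PPoly.from_spline` on the `splrep` output instead of `UnivariateSpline.roots()`. A minimal sketch of that root-finding path on a toy function (not CLASS data, just `sin(x)`, so the expected zeros are known) would be:

```python
import numpy as np
import scipy.interpolate

# Fit an interpolating cubic spline to sin(x) and recover its zeros via
# PPoly.from_spline, the same call the patched remove_bao uses on the
# second derivative of the wiggle ratio.
x = np.linspace(0.1, 10.0, 200)
tck = scipy.interpolate.splrep(x, np.sin(x), k=3, s=0)
roots = scipy.interpolate.PPoly.from_spline(tck).roots(extrapolate=False)
roots = np.sort(roots[(roots >= x[0]) & (roots <= x[-1])])
print(roots)  # close to [pi, 2*pi, 3*pi]
```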

I don't know if this is the appropriate way to let you know, but I hope this is helpful!!

brinckmann commented 5 months ago

Hi,

Ah, I remember the P(k) de-wiggling algorithm issues. I worked on that many years ago and I agree it was never the most stable; I'm sorry you had problems with it. Luckily, more recent P(k) analyses seem to have moved beyond this as a modelling method, but that also means the likelihood hasn't really been kept up to date (especially with the evolution of Python/SciPy), as we haven't seen any new data using a similar format.

I seem to recall there are a few algorithms out there that can do it, but I don't know which works best, and it is entirely possible some of the others work better. Thank you for reporting back on an alternative way to do this; it could prove useful if anyone needs to revisit these old likelihoods or has new likelihoods that use de-wiggling. We always appreciate people reporting back on things like this, it's very helpful.

Best, Thejs