hazma.relic_density.relic_density
Okay, I think I've done the best I can without cythonizing. We can now compute the relic density of any model. As long as the model implements model.mx and model.annihilation_cross_sections (which must return a dictionary with the key 'total'), the function relic_density will compute the relic density. Alternatively, the model can just implement a method called model.thermal_cross_section and it will still work (it still needs model.mx, though).
I've implemented the calculation in two different ways: a semi-analytic approximation and a full numerical integration of the Boltzmann equation. The semi-analytic calculation takes about 0.5 s, while the full integration of the Boltzmann equation takes ~10 s (yikes...). They both give very similar answers, so I made the semi-analytic calculation the default.
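For reference, the two modes correspond, schematically, to solving versus approximating the standard comoving Boltzmann equation (a sketch assuming entropy conservation and neglecting the temperature dependence of the effective degrees of freedom; the conventions in the code may differ in detail):

\frac{dY}{dx} = -\frac{s\,\langle\sigma v\rangle}{H\,x}\left(Y^{2} - Y_{\mathrm{eq}}^{2}\right),
\qquad x \equiv \frac{m_{\chi}}{T},
\qquad \Omega_{\chi} h^{2} = \frac{m_{\chi}\, s_{0}\, Y_{\infty}}{\rho_{\mathrm{crit}}}\, h^{2}.

The full calculation integrates this ODE numerically through freeze-out; the semi-analytic calculation instead estimates the freeze-out point x_f and uses the usual approximate closed-form solution for Y_infinity.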
I added specific functions in the scalar and vector mediator models for the thermal cross sections; these differ from the built-in method only by adding specific breakpoints for the integration.
We can make this calculation even faster if we cythonize the annihilation cross-section functions. I'm sure we could bring the calculation down to ms levels.
For the scalar and vector mediator models, you can use:
from hazma.scalar_mediator import HiggsPortal
from hazma.vector_mediator import KineticMixing
from hazma.relic_density import relic_density
hp = HiggsPortal(100.0, 123.5, 1.0, 1e-3)
km = KineticMixing(100.0, 125.2, 1.0, 1e-4)
# semi-analytical
relic_density(hp) # 0.11847057586537633
relic_density(km) # 0.11922204452307583
# integration of Boltzmann
relic_density(hp, semi_analytic=False) # 0.12104612783073646
relic_density(km, semi_analytic=False) # 0.13167106960636926
One can also make their own model to compute the relic density:
from hazma.relic_density import relic_density
class MyModel(object):
    def __init__(self, mx, sigmav):
        self.mx = mx
        self.sigmav = sigmav

    def thermal_cross_section(self, x):
        """x = mx / T"""
        return self.sigmav
# These give the correct relic density for DM
mx = 104.74522360006331e3 # ~104 GeV in MeV
sigmav = 1.7597967261428258e-15
model = MyModel(mx, sigmav)
relic_density(model, semi_analytic=True) # 0.11444106905299978
relic_density(model, semi_analytic=False) # 0.11963483643866808
These computations are significantly faster since no thermal integrals need to be computed (4 ms for the semi-analytic calculation and 40 ms for solving the Boltzmann equation).
I added tests verifying that both the semi-analytical method and Boltzmann integration work using the following points:
Each of these should give a relic density of Omega*h^2 ~ 0.119.
simps and similar functions don't work?

Lastly, I had a lot of issues trying to use different quadrature rules to compute the thermal cross-section integrals. I tried SciPy's simps, romb, and quadrature as well as NumPy's laggauss for Gauss-Laguerre quadrature. All of these gave an answer that was two orders of magnitude larger than what SciPy's quad gave. I have no idea what the issue is. In the end, I just used SciPy's quad. @adam-coogan any idea why there would be such a large discrepancy? Here is an MWE of the discrepancy:
from hazma.scalar_mediator import HiggsPortal
from scipy.integrate import simps, quad
from scipy.special import kn, k1
import numpy as np

def thermal_cross_section(x, model, use_quad=True):
    # need this to avoid divide-by-zero warnings
    if x > 300.0:
        return 0.0
    pf = x / (2.0 * kn(2, x))**2

    def integrand(z):
        sig = model.annihilation_cross_sections(model.mx * z)['total']
        kernel = z**2 * (z**2 - 4.0) * k1(x * z)
        return sig * kernel

    if use_quad:
        return pf * quad(integrand, 2.0, np.inf)[0]
    else:
        zs = np.linspace(2.0, 200.0, 1000)
        integrands = integrand(zs)
        return pf * simps(integrands, zs)
# This doesn't work
hp = HiggsPortal(100.0, 1000.0, 1.0, 1e-3)
print(thermal_cross_section(1.0, hp, use_quad=False)) # 4.972292358963628e-15
print(thermal_cross_section(1.0, hp, use_quad=True)) # 5.252226660601945e-17
# but this works?
hp = HiggsPortal(100.0, 150.0, 1.0, 1e-3)
print(thermal_cross_section(1.0, hp, use_quad=False)) # 2.469852823226098e-07
print(thermal_cross_section(1.0, hp, use_quad=True)) # 2.519329618702442e-07
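For reference, the quantity the MWE computes (matching the prefactor pf and the kernel in the code, with z = E_CM / mx and x = mx / T) is the standard thermally averaged cross section

\langle\sigma v\rangle = \frac{x}{\left[2 K_{2}(x)\right]^{2}} \int_{2}^{\infty} dz\; \sigma_{\mathrm{tot}}(z\, m_{\chi})\, z^{2}\left(z^{2} - 4\right) K_{1}(x z),

so the discrepancy must come from how this one-dimensional integral is evaluated.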
On second thought, I wonder if it's quad that's wrong? Maybe we should come up with an example where we can get an analytic result and compare. But I have a feeling that it's the resonances/thresholds that cause the issue. Here is a plot of what the integrand looks like for the scalar:
from hazma.scalar_mediator import HiggsPortal
from scipy.integrate import simps, quad
from scipy.special import k1
import numpy as np
import matplotlib.pyplot as plt

def thermal_cross_section_integrand(z, x, model):
    sig = model.annihilation_cross_sections(model.mx * z)['total']
    kernel = z**2 * (z**2 - 4.0) * k1(x * z)
    return sig * kernel
hp = HiggsPortal(100.0, 1000.0, 1.0, 1e-3)
x = 1.0
zs = np.linspace(2.0, 80.0, 1000)
integrands = thermal_cross_section_integrand(zs, x, hp)
plt.figure(dpi=100)
plt.plot(zs, integrands, lw=2)
plt.vlines(hp.ms / hp.mx, 0.0, 1e-10, ls='--', label='Resonance')
plt.vlines(2.0 * hp.ms / hp.mx, 0.0, 1e-10, ls='-.', label='Threshold')
plt.yscale('log')
plt.ylabel('Thermal Cross Section Integrand', fontsize=14)
plt.xlabel(r'$z = E_{\mathrm{CM}} / m_{\mathrm{DM}}$', fontsize=14)
plt.ylim(1e-35, 1e-10)
plt.legend(fontsize=14)
[Attachment: thermal_cross_section_integrand.pdf]
@adam-coogan wanna review some of the code to make sure things look alright? Also, do you have any other suggestions before a merge can take place?
I ended up cythonizing the scalar-mediator cross sections and achieved about a 50-70x speedup in both the semi-analytic and full Boltzmann-solving modes. It takes about 10 ms to compute a relic density in the semi-analytic mode and 150 ms in the full Boltzmann-solving mode.
I'm going to cythonize the vector mediator too. It would be nice to find a different way to speed this calculation up, though.
Nice work, @LoganAMorrison!
> @adam-coogan wanna review some of the code to make sure things look alright? Also, do you have any other suggestions before a merge can take place?
I will take a look at the code in the next week or so; I'm pretty busy organizing a conference right now. One design point -- I think you've written this in a very Julia style. While I agree that the relic abundance calculation can be separate from Hazma (and should eventually be its own module), I feel like we should have theory wrap it in a method so you can do e.g. hp.relic_abundance(), like with the other stuff in Hazma. What do you think?
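A minimal sketch of what that wrapper could look like (the method name relic_abundance and its keyword are illustrative; the actual Theory base-class API may end up different):

from hazma.relic_density import relic_density

class Theory:
    # ... existing Hazma theory machinery (mx, annihilation_cross_sections, ...) ...

    def relic_abundance(self, semi_analytic=True):
        # Delegate to the standalone function, passing the model itself,
        # which already provides everything relic_density needs.
        return relic_density(self, semi_analytic=semi_analytic)

Then hp.relic_abundance() would work the same way as the other model methods.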
> I added tests verifying that both the semi-analytical method and Boltzmann integration work using the following points:
I just checked and these cross sections are around 2e-26 cm^3/s, which is great!
> Lastly, I had a lot of issues trying to use different quadrature rules to compute the thermal cross-section integrals. I tried SciPy's simps, romb, and quadrature as well as NumPy's laggauss for Gauss-Laguerre quadrature. All of these gave an answer that was two orders of magnitude larger than what SciPy's quad gave. I have no idea what the issue is. In the end, I just used SciPy's quad. @adam-coogan any idea why there would be such a large discrepancy?
The difference almost certainly comes from setting zs = np.linspace(2.0, 200.0, 1000). As your plot shows, there is a very narrow resonance and then a sharp threshold; this would look much more dramatic with linear scaling on the y axis.
Methods like simps can give you results comparable to quad much faster if you properly choose the points at which to sample. We need to sample using log-spaced points on either side of the resonance and after the threshold. This should be pretty straightforward since we know the resonance occurs at e_cm = m_mediator and the threshold at e_cm = 2 * m_mediator. The main subtleties are tuning the number of sample points and handling scenarios like m_med < m_dm (respectively m_med < 2 * m_dm), in which case there is no threshold (respectively no resonance) in the integration range.
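A rough sketch of such a grid, clustering points logarithmically on both sides of the resonance and threshold (resonance_grid is a hypothetical helper, and the point count n and the eps parameter are illustrative, not tuned):

import numpy as np
from scipy.integrate import simps

def resonance_grid(z_min, z_max, z_res, z_thresh, n=200, eps=1e-6):
    # Build a z-grid with points piled up log-spaced around the resonance
    # (z_res = ms / mx) and the threshold (z_thresh = 2 * ms / mx).
    # Breakpoints outside [z_min, z_max] are simply skipped.
    breakpoints = [z_min]
    for zb in (z_res, z_thresh):
        if z_min < zb < z_max:
            breakpoints.append(zb)
    breakpoints.append(z_max)

    pieces = []
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        # log-spaced offsets from each edge of the sub-interval, so points
        # cluster near the breakpoints; eps sets how close we get to them
        offsets = np.geomspace(eps * (hi - lo), 0.5 * (hi - lo), n // 2)
        pieces.append(lo + offsets)
        pieces.append(hi - offsets)
    return np.unique(np.concatenate(pieces))

# Usage with hp, pf, and integrand from the MWE above:
# zs = resonance_grid(2.0, 200.0, hp.ms / hp.mx, 2.0 * hp.ms / hp.mx)
# pf * simps(integrand(zs), zs)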
It would be good to try to write this as a cross-check of quad. Also, when we call quad, we need to pass in the points e_cm = m_mediator and e_cm = 2 * m_mediator to ensure the integrator doesn't miss them.
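Something like this should do it; note that quad only accepts points on a finite interval, so the sketch below cuts the integral off at a large z_max instead of integrating to np.inf (z_max = 300.0 is illustrative; model, pf, and integrand are as in the MWE above):

from scipy.integrate import quad

# resonance at e_cm = m_mediator, threshold at e_cm = 2 * m_mediator
z_res = model.ms / model.mx
z_thresh = 2.0 * model.ms / model.mx
z_max = 300.0
special_points = [z for z in (z_res, z_thresh) if 2.0 < z < z_max]
result = pf * quad(integrand, 2.0, z_max, points=special_points)[0]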
> It would be nice to find a different way to speed this calculation up though.
I really don't know how to solve this in Python. I don't think there are tools for generating pxd files from Python ones, so users would have to cythonize their code themselves... Numba could be an alternative.
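If we try Numba, the sketch below is roughly what it would take: each cross section has to be written as a plain function of floats so @njit can compile it (sigma_toy here is a Breit-Wigner-shaped stand-in, not an actual hazma expression), while the thermal integral itself can stay in SciPy since quad is happy to call a compiled function:

from numba import njit

@njit
def sigma_toy(e_cm, ms, width):
    # Stand-in for model.annihilation_cross_sections(e_cm)['total']: a simple
    # resonant shape peaking at e_cm = ms. The real hazma cross sections
    # would need to be rewritten in this float-only style.
    s = e_cm**2
    return 1.0 / ((s - ms**2)**2 + (ms * width)**2)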
This is definitely more motivation to write Hazma.jl.
@adam-coogan shall we close this and merge?
Yeah, let's. I don't think we can improve this any further until we switch to Hazma.jl.
We should add submodules inside the scalar and vector mediator classes to compute the relic density of the dark matter in these frameworks.