me-manu / ebltable

Python package to read in tabulated photon densities and gamma-ray opacities
BSD 3-Clause "New" or "Revised" License

Energy units problem #7

Closed · R-Grau closed this issue 1 month ago

R-Grau commented 8 months ago

In the file tau_from_model.py, some comments and the variable name EGeV tell the user to input the energies in GeV, when the correct unit to use is TeV. I have not checked whether this also happens in other files, but the notebook examples use TeV and give the correct values, while if I use GeV I get wrong values.

me-manu commented 8 months ago

Dear @R-Grau, could you point to the exact lines in the code where this is a problem and/or post an example where you run into this issue? Thanks!

R-Grau commented 8 months ago

Dear @me-manu, sorry if I did not explain myself clearly; this is the first issue I have opened. The problem is in the definition of OptDepth on line 28, in the comment on lines 43 and 44, and after that everywhere the variable EGeV is used.

class OptDepth(GridInterpolator):
    """
    Class to calculate attenuation of gamma-rays due to interaction 
    with the EBL from EBL models.
    """

    def __init__(self, z, EGeV, tau, kx=1, ky=1, **kwargs):
        """
        Initialize the optical depth model class.

        Parameters
        ----------
        z: array-like
            source redshift, m-dimensional

        EGeV: array-like
            Energies in GeV, n-dimensional

        tau: array-like
            n x m array with optical depth values

        kx: int
            order of interpolation spline along energy axis, default: 1

        ky: int
            order of interpolation spline along redshift axis, default: 1

        kwargs: dict
            Additional kwargs passed to `~scipy.interpolate.RectBivariateSpline`
        """

Here is an example:

import numpy as np
from ebltable.tau_from_model import OptDepth

redshift = 0.2
energy_range = np.geomspace(0.2, 10, 15)  # in TeV
energy_range_GeV = np.geomspace(200, 10000, 15)  # in GeV

dominguez_model = OptDepth.readmodel(model="dominguez")
dominguez_tau = dominguez_model.opt_depth(redshift, energy_range)
dominguez_tau_GeV = dominguez_model.opt_depth(redshift, energy_range_GeV)

When comparing the two results, the one computed with GeV values is wrong, while the one using TeV is correct. But as I said, the notebook notebooks/example_tau.ipynb already uses TeV units. The fix would only require renaming the variable EGeV to ETeV and changing the comment in line 44.
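
For reference, a quick way to confirm that the mismatch is just the factor of 1000 between GeV and TeV (using the same calls as in the example above) is to convert the GeV array to TeV before passing it to opt_depth:

import numpy as np
from ebltable.tau_from_model import OptDepth

redshift = 0.2
energy_range = np.geomspace(0.2, 10, 15)         # in TeV
energy_range_GeV = np.geomspace(200, 10000, 15)  # the same energies in GeV

dominguez_model = OptDepth.readmodel(model="dominguez")

# opt_depth interprets its energy argument as TeV, so dividing the
# GeV array by 1000 should reproduce the TeV result
tau_TeV = dominguez_model.opt_depth(redshift, energy_range)
tau_converted = dominguez_model.opt_depth(redshift, energy_range_GeV / 1e3)

print(np.allclose(tau_TeV, tau_converted))  # expected: True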

Cheers

me-manu commented 1 month ago

Dear @R-Grau, sorry for the prolonged silence. OK, I see your point. When instantiating the class, the code asks for the energies in GeV, the redshift, and the optical depth (the latter is a 2D array of optical depth values over a regular grid in energy and redshift). The reason I chose GeV here is that many model tables in the literature are given in GeV. But if you use a built-in model, e.g., by calling

dominguez_model = OptDepth.readmodel(model="dominguez")

you actually never touch this part of the code. When you calculate the optical depth from the interpolation with

dominguez_tau = dominguez_model.opt_depth(redshift, energy_range)

the energy should be in TeV. This is also stated in the documentation of the function (lines 300 and 309 of tau_from_model.py). Therefore, I don't see the need to change the code at this point, as I hope the documentation is clear enough and the example demonstrates the usage as well.
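
To summarize the two conventions, here is a minimal sketch; the first part follows the example above, while the tabulated grid in the second part is a made-up placeholder that only assumes the constructor accepts the arguments as documented in the docstring quoted earlier:

import numpy as np
from ebltable.tau_from_model import OptDepth

# 1) Built-in model: opt_depth expects energies in TeV
tau_model = OptDepth.readmodel(model="dominguez")
ETeV = np.geomspace(0.2, 10, 15)   # energies in TeV
tau = tau_model.opt_depth(0.2, ETeV)

# 2) Direct instantiation from a tabulated grid: the constructor expects GeV.
#    The grid below is a placeholder, not a real EBL model.
z_grid = np.linspace(0.01, 1.0, 10)        # m redshift values
EGeV_grid = np.geomspace(10.0, 1e5, 50)    # n energies in GeV
tau_grid = 0.1 * np.outer(np.log10(EGeV_grid), z_grid)  # dummy n x m optical depths
custom_model = OptDepth(z_grid, EGeV_grid, tau_grid, kx=1, ky=1)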

I hope you agree and I'm closing this issue for now. But feel free to re-open if something is not clear.