TRASAL / frbpoppy

Fast Radio Burst Population Synthesis in Python
MIT License

Code speedup options #42

Open telegraphic opened 3 years ago

telegraphic commented 3 years ago

Hi David,

I've been using your code to estimate how many FRBs we could detect with a new survey -- thank you very much for making this available, it is fantastic! (I will be in touch about the science via email once it's a bit better fleshed out).

I had some ideas for speeding up the code, which are in this notebook.

The first is to speed up gen_dm() in CosmicPopulation by using healpy to do the coordinate lookup instead of sqlite (there is a fair bit of overhead in making multiple sql queries). I just uploaded precomputed NE2001 and YMW16 HEALPix maps to PyGEDM, which are essentially equivalent to the sqlite table frbpoppy populates.

The second is to speed up gen_dist() by using scipy's interp1d to generate interpolation functions that can be used in place of the DistanceTable lookup.

Combined, these should significantly speed up the time taken for CosmicPopulation.generate().
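
A sketch of the second idea (the grids here are a toy relation, purely illustrative — the real values would come from frbpoppy's DistanceTable):

```python
import numpy as np
from scipy.interpolate import interp1d

# Illustrative stand-in for a precomputed redshift -> distance table;
# frbpoppy's actual DistanceTable values would be used here
z_grid = np.linspace(0, 6, 1000)
dist_grid = 4300.0 * z_grid / (1 + 0.5 * z_grid)  # toy monotonic relation, Mpc

# Build the interpolator once...
z_to_dist = interp1d(z_grid, dist_grid, kind='cubic')

# ...then evaluate millions of sources in one vectorised call,
# instead of one table lookup per source
z_samples = np.random.uniform(0, 6, 100)
distances = z_to_dist(z_samples)
```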

As you'll see in the notebook, the NE2001 models show some disagreement; I'm not sure whether this is due to rounding or something else.

0xCoto commented 3 years ago

@telegraphic - Hi Danny,

I've been using your code to estimate how many FRBs we could detect with a new survey

Have you got the code for your detection rate estimation available somewhere? I'd be interested to take a look at how exactly you're going about it, the assumptions you've made about the input FRB population and whether you're comparing the simulated detection rate of another telescope with your survey's rate (and then looking at the actual numbers provided by real observations to make a prediction).

I can see that frbpoppy has already got an example script for computing detection rates, but some values used are a bit unclear to me (e.g. does 1e4 have a good reference, and does CosmicPopulation.simple take varying spectral burst properties etc. into consideration to give accurate detection rates?).

Do you simply use this script and add your own survey properties to surveys.csv, or are you computing anything more than that?

Thanks a lot in advance!

telegraphic commented 3 years ago

Hi @0xCoto,

@davidgardenier is of course the expert, and I was intending to get his input when we get more serious about simulations, so please consider below to be of dubious quality:

Here's a gist, currently quite basic:

# Imports (assuming frbpoppy is installed)
from frbpoppy import CosmicPopulation, Survey, SurveyPopulation

# Set up the survey (this reads from the CSV file; I added a line for 'eda')
survey = Survey('eda')

# Now generate the cosmic FRB population
n_source = int(5e6)
n_days   = 1.0
cosmic_pop = CosmicPopulation.complex(n_source, n_days=n_days)
cosmic_pop.generate()

Output:

cosmic_pop.py | Generating complex population

Then we see what of the cosmic population our survey can see:

# Calculate what we can see
survey_pop = SurveyPopulation(cosmic_pop, survey)

# Print stats
print(survey_pop.source_rate)

for DM in (100, 500, 1000, 2000):
    print("DM smearing  @ %i pc/cm3 [ms]: %2.3f" % (DM, survey.calc_dm_smear(DM)))

print("Fluence Limit [Jy ms]:", survey.calc_fluence_limit(10e-3))

Output:

survey_pop.py | Surveying complex with eda
rates.py      | complex_eda                Days    Sources          %
rates.py      | -----------------------------------------------------
rates.py      | Cosmic Population           1.0    5000000        100
rates.py      | Too late                    1.0    158.556     0.0032
rates.py      | Outside survey              1.0 3545558.9567    70.9112
rates.py      | Outside pointings           1.0          0        0.0
rates.py      | Too faint                   1.0 1453848.9166     29.077
rates.py      | Detected                    1.0   433.5707     0.0087
rates.py      | /Gpc^3                   365.25      83.47          -
rates.py      | Expected                 0.0023          1          -
rates.py      | -----------------------------------------------------
rates.py      | 
DM smearing  @ 100 pc/cm3 [ms]: 0.307
DM smearing  @ 500 pc/cm3 [ms]: 1.537
DM smearing  @ 1000 pc/cm3 [ms]: 3.073
DM smearing  @ 2000 pc/cm3 [ms]: 6.146
Fluence Limit [Jy ms]: 1.6970562748477138
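
For context, the DM smearing values come from the standard intra-channel dispersion-smearing relation; a rough sketch (the channel width and centre frequency below are made-up illustrative values, not the EDA's actual parameters):

```python
def dm_smear(dm, bw_chan_mhz, freq_mhz):
    """Intra-channel DM smearing in ms, from the standard cold-plasma
    dispersion relation: 8.3e6 * DM * channel_bandwidth / freq^3."""
    return 8.3e6 * dm * bw_chan_mhz / freq_mhz**3

# Illustrative parameters only (0.1 MHz channels at 1400 MHz)
t_100 = dm_smear(100, 0.1, 1400.0)
t_200 = dm_smear(200, 0.1, 1400.0)
```

Note the smearing scales linearly with DM, which is why the table above doubles with each doubling of DM.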

For plotting I'm doing things like:

    import numpy as np
    import matplotlib.pyplot as plt

    z = survey_pop.frbs.z
    n_survey, b_survey = np.histogram(z, bins=20, range=(0, 1))
    plt.plot ...

(or using the in-built web app plotter, which is pretty awesome!)

Not sure if it is the best way to go, but you can edit params of existing surveys before generating populations:

survey = Survey('utmost')
# d is the surveys.csv table loaded as a pandas DataFrame
utmost2d_params = d[d['survey'] == 'utmost'].squeeze()
#print(utmost2d_params)
survey.bw = 45.0
survey.bw_chan = 100.0/1024
survey.T_rec = 70.0
survey.beam_size_at_fwhm = 26.0
survey.beam_size = 26.0
survey.gain = 0.21

My understanding is that the code does not directly give you an event rate in FRBs/sky/day, but rather tells you what fraction of the simulated FRB population is detected. You then need to scale this by simulating an existing survey with a well-pinned-down event rate, e.g. Parkes HTRU:

survey = Survey('parkes-htru')
survey_pop = SurveyPopulation(cosmic_pop, survey)
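
The scaling itself is just a ratio: given the simulated detection fractions for both surveys on the same cosmic population, and the reference survey's real-world rate, the new survey's rate follows. A minimal sketch (all the numbers below are placeholders, not measured values):

```python
def scale_rate(det_frac_new, det_frac_ref, real_rate_ref):
    """Scale a simulated detection fraction to an absolute event rate,
    using a reference survey with a known real-world rate."""
    return real_rate_ref * det_frac_new / det_frac_ref

# Placeholder numbers: detection fractions from two SurveyPopulation runs
# on the same cosmic population, and an assumed reference rate in FRBs/day
rate_new = scale_rate(det_frac_new=0.0087e-2, det_frac_ref=0.1e-2,
                      real_rate_ref=0.2)
```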

As for the surveys, @davidgardenier 's paper has a table comparing them. To me (please correct if I've misunderstood!), there are three models, where 'simple' is a toy model and 'complex' and 'optimal' boil down to two hypotheses:

1) (non-repeating) FRBs are caused by events which are not like giant bursts from magnetars. We simulate many different spectral indices and make all burst energies equally probable (between 10^40 and 10^45 erg).

2) FRBs are caused by magnetar giant bursts. We assume a spectral index of -0.4, as seen for the Galactic Centre magnetar, and that burst energies follow a power law (there are many more faint bursts than there are bright bursts).

From my simulations at low frequency, these two models have drastically different implications for the source counts: the complex model predicts far more detectable FRBs than the optimal model.
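
To make the two energy hypotheses concrete, here's a toy sampler (purely illustrative, not frbpoppy's actual luminosity-function code; I've read "equally probable" as flat per decade of energy, though uniform in linear energy is another reading):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_flat_log_energy(n, lo=1e40, hi=1e45):
    """Hypothesis 1: all burst energies equally probable (flat in log E)."""
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), n)

def sample_powerlaw_energy(n, lo=1e40, hi=1e45, index=-1.5):
    """Hypothesis 2: power-law energies, many more faint bursts than bright.
    Inverse-CDF sampling for p(E) ~ E**index (index is an arbitrary choice)."""
    a = index + 1
    u = rng.uniform(0, 1, n)
    return (lo**a + u * (hi**a - lo**a)) ** (1 / a)

flat = sample_flat_log_energy(10000)
plaw = sample_powerlaw_energy(10000)
# The power-law population is dominated by faint bursts, so its median
# energy sits far below the flat-in-log population's
```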

0xCoto commented 3 years ago

Thank you very much for your input, I'll definitely keep everything you mentioned in mind!