lesgourg / class_public

Public repository of the Cosmic Linear Anisotropy Solving System (master for the most recent version of the standard code; GW_CLASS to include Cosmic Gravitational Wave Background anisotropies; classnet branch for acceleration with neural networks; ExoCLASS branch for exotic energy injection; class_matter branch for FFTlog)

solver/shooting improvement #484

Open emiliobellini opened 2 years ago

emiliobellini commented 2 years ago

Hi all, I recently ran into a silly problem that I solved, though probably not in the most elegant way. So I am writing to share the solution and to suggest a future improvement, in case it is of interest.

I was implementing an early dark energy (EDE) model in Class 3.2 (links to the repositories below). This model has a feature around recombination: the density of the EDE scalar field peaks at that epoch. Roughly speaking, the redshift of this peak (z_c) is controlled by the mass of the scalar field (m), and it is often more convenient for users to pass z_c instead of m, which can be done with shooting.

The shooting works well in Class 2.8-ish but fails miserably in 3.2 :( The reason is that Class 3.2 stores too few points in the background table (regulated by background_Nloga = 3e3 by default). So when the mass is varied during shooting, the peak read off the table always falls at the same redshift z_c, and the quantity the shooting tries to match does not respond to small changes in m. If background_Nloga is increased by one order of magnitude everything works fine, but the code becomes a bit slower. Just for comparison, Class 2.8 stores around 4e4 lines in the background table and was not slower.

If I understood the ndf15 solver correctly, the time steps actually computed are many more than the ones stored in background_table, so the code does capture the features of the model correctly, but not the differences due to small variations of some parameter (see the toy illustration below).

On top of this fix, a more elegant solution would be to modify the solver so that the background table is filled on the fly (with smaller time steps close to features) rather than built from an evenly sampled, precomputed log(a) array, which is fine only when there are no additional timescales in the universe.
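To make the failure mode concrete, here is a small toy illustration (not CLASS code; the grid range, peak width and the m-dependence of the peak are all invented for the example). A peaked quantity tabulated on an evenly sampled log(a) grid only moves in discrete jumps as the underlying parameter varies, so a shooting loop varying m sees no change in the recovered z_c until the grid is refined:

```python
import numpy as np

def peak_redshift_from_table(m, n_loga):
    """Redshift of the peak of a toy EDE density bump, as read off a log(a) table."""
    loga = np.linspace(-14.0, 0.0, n_loga)                   # evenly sampled log(a) grid
    loga_peak_true = -6.999 + 0.05 * np.log(m)               # hypothetical m-dependence of the peak
    rho_ede = np.exp(-((loga - loga_peak_true) / 0.1) ** 2)  # toy EDE density bump near recombination
    a_peak = np.exp(loga[np.argmax(rho_ede)])                # peak position as seen on the grid
    return 1.0 / a_peak - 1.0                                # corresponding redshift

for n_loga in (3000, 30000):                                 # ~default background_Nloga vs. x10
    z_values = {round(peak_redshift_from_table(m, n_loga), 2) for m in (1.00, 1.01, 1.02)}
    print(n_loga, sorted(z_values))
# With 3000 points all three masses map to the same tabulated z_c (flat shooting residual);
# with 30000 points the three peaks land on distinct grid points and the shooting can converge.
```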

What do you think? Many thanks! Emilio

Class v2.8 (it works): https://github.com/mwt5345/class_ede
Class v3.2 (it works only after increasing background_Nloga; the increase is already applied by default in this branch): https://github.com/emiliobellini/class_public/tree/class_ede
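For reference, a minimal sketch of the background_Nloga workaround through the classy Python wrapper, assuming (as in standard classy usage) that precision parameters can be passed via set() together with the cosmological ones; the factor-of-ten value simply mirrors the increase described above, and the EDE-specific parameters of the branch are omitted here since their names are not given in this issue:

```python
from classy import Class

cosmo = Class()
cosmo.set({
    'output': 'tCl',
    'background_Nloga': 30000,   # default ~3e3; denser background table so shooting can resolve z_c
})
cosmo.compute()
print(cosmo.h())                 # sanity check that the run completed
cosmo.struct_cleanup()
cosmo.empty()
```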