21cmfast / 21cmFAST

Official repository for 21cmFAST: a code for generating fast simulations of the cosmological 21cm signal

[Feature Req.] update nu_tau_one_approx #122

Open qyx268 opened 4 years ago

qyx268 commented 4 years ago

Is your feature request related to a problem? Please describe.

When calculating the X-ray optical depth, we were computing nu_tau_one with nu_tau_one_approx, i.e. using the average x_e and filling_factor_of_HI, because we didn't have Ts.c talking to find_HII_bubbles.c.

Describe the solution you'd like

Now we can pass IonizedBox to ComputeTsBox and use the cell values instead of the globally averaged quantities.
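A minimal sketch of the proposed data flow, with simplified stand-in types (these are not the actual 21cmFAST structs or signatures, just an illustration of passing the previously computed ionization field into the spin-temperature step):

```c
/* Sketch only: simplified stand-ins for the real 21cmFAST structs/signatures. */
#include <stddef.h>

typedef struct {
    float  *xH_box;   /* neutral-fraction field written by find_HII_bubbles.c */
    size_t  n_cells;  /* number of cells in the (flattened) box */
} IonizedBox;

/* Proposed: ComputeTsBox also receives the previously computed IonizedBox, so
 * the X-ray optical-depth step can use the actual ionization state instead of
 * the internally estimated <x_e> / filling_factor_of_HI of nu_tau_one_approx. */
void ComputeTsBox_sketch(double redshift, const IonizedBox *previous_ionized_box);
```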

BradGreig commented 4 years ago

@qyx268 using the exact cell value would require removing the interpolation tables and performing the calculation explicitly, which would make the calculation much slower.

It can be put in as an option, though, along similar lines to what I was referring to in #46.

andreimesinger commented 4 years ago

No, the point wasn't so much the cell's value (although an inhomogeneous tau_X would be nice eventually), but passing the global EoR history. Before, there was a mismatch between the EoR history estimated in nu_tau_one and that calculated by find_HII_bubbles. But Yuxiang tells me that find_HII_bubbles.c and Ts.c are now called concurrently for every redshift step, so Ts.c can know the actual EoR history at higher redshifts instead of having to guess it. This is actually faster, since it doesn't have to recalculate collapse fractions in nu_tau_one.
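A minimal sketch of what storing and interpolating that global history could look like (fixed-size storage and function names are illustrative only, not the actual implementation):

```c
#include <stddef.h>

#define MAX_STEPS 1024   /* illustrative cap on the number of redshift steps */

/* Global EoR history accumulated as the run steps downward in redshift:
 * after each find_HII_bubbles call, append (z, <xHI>) here.                */
static double z_hist[MAX_STEPS];
static double xHI_hist[MAX_STEPS];
static size_t n_hist = 0;

static void record_xHI(double z, double xHI_mean)
{
    if (n_hist < MAX_STEPS) {
        z_hist[n_hist]   = z;        /* z decreases monotonically step to step */
        xHI_hist[n_hist] = xHI_mean;
        n_hist++;
    }
}

/* Linear interpolation of the stored history, for use inside the tau_X
 * integral in place of the analytic collapse-fraction guess.               */
static double xHI_at(double z)
{
    if (n_hist == 0 || z >= z_hist[0])
        return 1.0;                       /* above the first stored step: neutral */
    for (size_t i = 1; i < n_hist; i++) {
        if (z >= z_hist[i]) {             /* z_hist[i] <= z <= z_hist[i-1] */
            double f = (z - z_hist[i]) / (z_hist[i-1] - z_hist[i]);
            return xHI_hist[i] + f * (xHI_hist[i-1] - xHI_hist[i]);
        }
    }
    return xHI_hist[n_hist - 1];          /* below the lowest stored redshift */
}
```

Here record_xHI would be called once per step after find_HII_bubbles, and xHI_at inside the optical-depth integrand.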

qyx268 commented 4 years ago

I thought that if we pass IonizedBox to ComputeTsBox, we could directly use its xH_box without any new calculation, and could also remove the calculation of filling_factor_of_HI_zp? Then we wouldn't need initialise_Nion_Ts_spline, nor to calculate the average turnover masses for Splined_Fcollzp_mean_MINI. It should save a bit of time too.
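A minimal sketch of that replacement, assuming xH_box is the flattened neutral-fraction field of the passed-in IonizedBox (names are illustrative):

```c
#include <stddef.h>

/* Volume-averaged neutral fraction from an IonizedBox's xH_box field.
 * This average would stand in for the estimated filling_factor_of_HI_zp,
 * so the splines used only for that estimate could be dropped.            */
static double mean_xH(const float *xH_box, size_t n_cells)
{
    double sum = 0.0;
    for (size_t i = 0; i < n_cells; i++)
        sum += xH_box[i];
    return sum / (double)n_cells;
}
```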

andreimesinger commented 4 years ago

Yes, I am saying to store the actual filling_factor_of_HI (calculated in find_HII_bubbles) and use that in Ts.c, instead of estimating it. This would save time, be more self-consistent, and should be trivial to implement.

Note that you do not need the actual 3D box for this... it is just the global average neutral fraction, since we are assuming a uniform tau_X. One could calculate a spatially dependent tau_X by storing the tau around each cell based on that ionization field, but this is a more elaborate calculation and would make the code slower.

qyx268 commented 4 years ago

Okay, we can stick with using the mean value then. But I just realized that we need to do the integral to get nu_tau_one from zpp to zp, meaning that we need the histories of x_e and the filling factor of HI to do that. While we are ignoring the evolution of x_e by using the value at zp in the integrand, I'm not sure whether we can do the same thing for the filling factor. There is a note in the code: "simplification to use the <x_e> value at zp and not zhat. shouldn't matter much since the evolution in x_e_ave is slower than fcoll. in principle should make an array to store past values of x_e_ave.."

If we do need the evolution of the filling factor, then we would need to pass its history vs z into the calculation of brightness_temperature and redo the interpolation every time we add a new point to that history. This means it might not be quicker, since we would need to do the interpolation at every redshift, whereas now we only need to do it once.

P.S. Currently, we are also ignoring the evolution of the Mturns (i.e. M_LW and M_cool) when doing the integral, which is probably not a good assumption since they do evolve a bit (even though we are ignoring RE feedback when calculating Ts...). If we decide to replace nu_tau_one_approx with nu_tau_one, then we no longer have this concern.
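For reference, the quantity being inverted is schematically (constants and the exact cross-section forms as implemented in Ts.c are omitted; this is only to show why the histories enter the integrand):

$$\tau_X(\nu, z_p, z'') = \int_{z_p}^{z''} \frac{dl}{dz'} \Big[ n_{\rm HI}(z')\,\sigma_{\rm HI}(\nu') + n_{\rm HeI}(z')\,\sigma_{\rm HeI}(\nu') + n_{\rm HeII}(z')\,\sigma_{\rm HeII}(\nu') \Big]\, dz', \qquad \nu' = \nu\,\frac{1+z'}{1+z_p},$$

with nu_tau_one(zp, zpp) returning the frequency at which $\tau_X = 1$. The species densities scale as $(1+z')^3$ times mean fractions that depend on both the filling factor of HI and x_e at z', which is why their histories over the whole [zp, zpp] range are needed (or a simplification such as the fixed-zp value noted in the code comment).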

BradGreig commented 4 years ago

nu_tau_one needs to know the filling factor at the current redshift, and all previous redshifts. At present, it just guesses both.

zpp can go beyond Z_HEAT_MAX, whereas you'll only ever have stored redshifts from Z_HEAT_MAX downwards if reading in from previous snapshots. Therefore, you'd still have to guess the filling factor for redshifts greater than Z_HEAT_MAX. Plus, you'd still have to guess for the current redshift, as SpinTemperature is called before IonizeBox.

qyx268 commented 4 years ago

That is correct. We do not have the filling factor before Z_HEAT_MAX or after the previous redshift. In most cases, the filling factor before Z_HEAT_MAX can be assumed to be one, or the same as at Z_HEAT_MAX.

As for the range between the previous redshift and the current one, I guess we can keep using Nion_General to estimate the filling factor at the current redshift and feed it into the history we read from IonizeBox. But since we now have the information about the turnover masses from IonizeBox, this estimate should be more accurate (and the filling factor before the previous redshift is exact).
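A minimal sketch of the piecewise lookup this implies; the helpers estimate_xHI_from_Nion and xHI_history_interp are hypothetical stand-ins (not existing 21cmFAST functions), and z_prev denotes the lowest redshift with a stored IonizedBox:

```c
/* Hypothetical helpers: a Nion_General-style estimate using the turnover
 * masses from the previous IonizedBox, and interpolation over the stored
 * <xHI>(z) history. Prototypes only -- stand-ins for this sketch.         */
double estimate_xHI_from_Nion(double z, double M_turn_LW, double M_turn_cool);
double xHI_history_interp(double z);

/* Mean neutral fraction entering the tau_X integral at redshift z:
 *   z >= Z_HEAT_MAX           -> fully neutral by construction
 *   z_prev <= z < Z_HEAT_MAX  -> interpolate the stored IonizeBox history
 *   z < z_prev                -> current step; IonizeBox has not run yet,
 *                                so fall back to the analytic estimate.    */
double filling_factor_HI(double z, double z_prev, double z_heat_max,
                         double M_turn_LW, double M_turn_cool)
{
    if (z >= z_heat_max)
        return 1.0;
    if (z < z_prev)
        return estimate_xHI_from_Nion(z, M_turn_LW, M_turn_cool);
    return xHI_history_interp(z);
}
```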

qyx268 commented 4 years ago

If adding one point is not sufficient for doing the interpolation, we can keep initialise_Nion_Ts_spline and use the turnover masses read from IonizeBox when using the splined table at redshifts lower than the previous redshift. As for the filling factor before the previous redshift, we use the value read from IonizeBox.

What do you think? @BradGreig

andreimesinger commented 4 years ago

Yes, by construction Z_HEAT_MAX was chosen so that the filling factor of HI is unity.

BradGreig commented 4 years ago

Hi @qyx268 @andreimesinger, I'll finally respond to this.

I'm not sure how easy this is to implement. At the current redshift, zp, ComputeTsBox only has access to the boxes from the previous redshift. It cannot access things from all subsequent redshifts which would be required to actually perform this calculation.

In which case this information would need to be stored somewhere else and kept around at all steps of the calculation. How this can be done robustly would require some thinking.

andreimesinger commented 4 years ago

I think maybe I am not understanding something: at zp, the opacity from zp to zpp > zp only depends on the previous redshifts, not subsequent ones. In the original code (OC), I added an estimate of nu_tau_one based on an analytic collapse fraction estimate (zeta * fcoll) for the neutral fraction xHI(z > zp), because Ts.c was called before all of the find_HII_bubbles.c calls at the higher redshifts. But as I understand it, in the new code Ts and find_HII_bubbles are computed concurrently for each zp. If this is indeed the case, you already have xHI(z > zp) computed in previous calls, so it should be straightforward to save that actual EoR history and use it (interpolated) in nu_tau_one, instead of the fcoll approximation.
