Closed EiffL closed 3 years ago
And just to connect the dots: for now we will be using the old FastPM background.py file for everything related to cosmology, but Denise and I will be working in parallel on a pure TF implementation of these computations in #46. In principle, we'll then be able to use this new TF code as a drop-in replacement.
If there are additional cosmological computations required for the lensing part that are not currently available in background.py, do let us know :-)
So I went through your implementation and I think I understand what you did; I should be able to finish the implementation. Thanks Max :-) But this made me think of another point, and I have a question for you and @VMBoehm.
Ultimately... we want to be able to estimate initial conditions and do parameter inference from the observed lensing lightcone. If we replicate the box, then when we fit it to data, a given part of the volume gets constrained multiple times and from multiple orientations, so that's no good. If we want to fit a lightcone to data, we need to simulate the full lightcone, without box replication, no?
I have a trivial way and a complicated way to do this in FlowPM:
The trivial way first: because we generate batches of cubes at once, we can actually generate in a single batch all of the independent boxes needed to tile the lightcone, and output a kappa lightcone from one batch of sims. In fact, one batch of sims can generate several "independent" lightcones if we shuffle and flip the cubes. I don't see a drawback here, except that the cubes needed only at high redshifts get unnecessarily evolved to z=0, and that we get sharp, disconnected transitions at the interfaces between cubes.
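The shuffle-and-flip trick could look something like the following numpy sketch (function name and interface are purely illustrative, not FlowPM code): permute the batch axis and randomly mirror each spatial axis, so the same batch of cubes can tile several decorrelated lightcones.

```python
import numpy as np

def decorrelate_boxes(boxes, seed=0):
    """Shuffle a batch of simulated cubes along the batch axis and apply
    random axis flips, so one batch can tile several 'independent'
    lightcones. `boxes` has shape (batch, nx, ny, nz). Illustrative only."""
    rng = np.random.default_rng(seed)
    out = boxes[rng.permutation(boxes.shape[0])]  # reshuffle batch order
    for axis in (1, 2, 3):                        # flip each spatial axis with p=0.5
        flip = rng.random(out.shape[0]) < 0.5
        out[flip] = np.flip(out[flip], axis=axis)
    return out

boxes = np.arange(4 * 2 * 2 * 2, dtype=float).reshape(4, 2, 2, 2)
shuffled = decorrelate_boxes(boxes)
```

This only reorders and mirrors the density field, so one-point statistics are untouched while the relative orientation of neighbouring cubes in the tiling is randomized.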
The complicated way: we run a "pencil beam" simulation, much more extended along the line-of-sight direction than along the other two. We can even progressively slice it and discard the high-radial-distance parts of the volume as the simulation runs. I think this should work just fine as well, but I wonder whether there could be some weird effect from having many more modes in one direction than in the others.
What do you guys think? Is removing box replication a pipe dream?
like this :-)
So I've thought about it a bit more, and I'm leaning towards rectangular boxes.
Another point: I realized that MADLens follows the Born approximation, right?
I'm wondering... what's the cost/benefit analysis there? Looking at Andrea's paper (https://arxiv.org/pdf/1612.00852.pdf), doing full ray tracing doesn't seem to be many orders of magnitude more costly, even if a bit more work is needed, I agree. Probably far less expensive than the cost of running the actual N-body though...
In all cases, it looks like we will need to use lens planes, so we need a lightcone function that exports lens planes from the correct location in the simulation volume at the correct time. These can simply be weighted together to obtain convergence maps under the Born approximation, and, time permitting, we can also use them to implement full ray tracing.
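For reference, the standard Born-approximated convergence for a single source plane at comoving distance $\chi_s$ in a flat universe (a textbook result, not specific to FlowPM) is:

```latex
\kappa(\boldsymbol{\theta}) = \frac{3}{2}\,\Omega_m \left(\frac{H_0}{c}\right)^{2}
\int_0^{\chi_s} \mathrm{d}\chi\,
\frac{\chi}{a(\chi)}\,\frac{\chi_s - \chi}{\chi_s}\,
\delta\!\big(\chi\boldsymbol{\theta},\,\chi\big)
```

Discretized over lens planes of thickness $\Delta\chi$, the integral becomes exactly the weighted sum of exported planes described above.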
Also... genuine question here: what's wrong with forward ray tracing? The light rays are not strongly deflected in general, so if we shoot them from high redshift towards Earth, they shouldn't land very far off... or am I being too naive?
I gave it a try and implemented a very naive lensing lightcone from a rectangular simulation over there: https://github.com/modichirag/flowpm/blob/lensing/notebooks/TestLightcone.ipynb
What I'm doing is running a simulation in a 64x64x640 box (physical size 200x200x2000 Mpc/h), which should more or less correspond to a 5x5 deg^2 field up to z=0.8. I first run the whole volume until reaching the scale factor of the highest-redshift lens plane (in this case around z=0.8), then I run the simulation in steps corresponding to a given Δχ and project slices of the simulation onto 2D density planes. Finally, I discard the parts of the volume that are no longer needed for the lightcone.
So it looks like this: this is the whole simulation at z=0.8, when we start traveling through the lightcone.
And this is the final simulation at z=0: here we see that most of the particles have been removed, so it's much faster.
And here are slices of equal comoving size in the volume, each exported at the corresponding scale factor:
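The stepping scheme described above (project a slab, then discard it) can be sketched on a toy density cube like this; the function and its arguments are illustrative only, not the notebook's actual code, and the scale-factor callback just records when each slab would be exported rather than actually evolving particles.

```python
import numpy as np

def export_lens_planes(rho, d_slice, a_of_slice):
    """Toy sketch of the lightcone stepping scheme: walk through a
    (nx, ny, nz) density cube from the far end, project each comoving slab
    of thickness `d_slice` cells onto a 2D plane, then discard it so the
    remaining volume shrinks as the 'simulation' advances toward z=0."""
    planes, scales = [], []
    nz = rho.shape[2]
    for i, z0 in enumerate(range(nz - d_slice, -1, -d_slice)):
        planes.append(rho[:, :, z0:z0 + d_slice].sum(axis=2))  # project slab
        rho = rho[:, :, :z0]                                   # drop far part
        scales.append(a_of_slice(i))
    return np.stack(planes), scales

rho = np.ones((4, 4, 8))
planes, scales = export_lens_planes(rho, d_slice=2,
                                    a_of_slice=lambda i: 0.5 + 0.1 * i)
```

Each exported plane here is just a column sum over the slab; in the real pipeline this would be a CIC projection of the particles in that comoving shell.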
What remains to be implemented is the actual ray tracing from these lens planes. In the simplest case, it's just a matter of weighting the planes according to the lensing efficiency.
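In that simplest case, the weighting could look like the following numpy sketch (single source plane, flat universe; the function name and interface are illustrative, not the notebook's actual code):

```python
import numpy as np

C_OVER_H0 = 2997.92458  # c/H0 in Mpc/h (c = 299792.458 km/s, H0 = 100h km/s/Mpc)

def born_convergence(planes, chi, a, chi_s, d_chi, omega_m=0.3):
    """Born-approximated convergence from a stack of 2D overdensity planes.
    `planes` is (n_planes, nx, ny); `chi` and `a` give each plane's comoving
    distance [Mpc/h] and scale factor; `chi_s` is the source distance.
    Illustrative sketch only -- not FlowPM's actual interface."""
    prefac = 1.5 * omega_m / C_OVER_H0**2
    # lensing efficiency kernel for a single source plane, flat universe
    w = prefac * d_chi * (chi / a) * (chi_s - chi) / chi_s
    return np.tensordot(w, planes, axes=1)  # sum_i w_i * delta_i

planes = np.ones((2, 4, 4))                 # two uniform toy overdensity planes
kappa = born_convergence(planes,
                         chi=np.array([100.0, 200.0]),
                         a=np.array([0.9, 0.8]),
                         chi_s=300.0, d_chi=100.0)
```

The `tensordot` contracts the per-plane weights against the plane stack, which is exactly the discrete Born sum over lens planes.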
Quick question though: do you do something special in MADLens to get a smooth lens plane at low redshift, or is it just a matter of having enough resolution?
@dlanzieri since I know you have started looking at this issue, here is a little bit of literature that you may find useful:
Hi @EiffL!
Sorry for being MIA for the week. It is grad application season and I am racing to get all of my statements (and the MADLens paper) finished... We are sooooo close. I do not know a lot about ray tracing, so I will read those papers that you linked as well! In terms of implementing rectangular boxes, I am a little confused about the cost/benefit and would like to understand that better. Maybe when we chat in our talk next week you could talk me through it?
Hi Max, of course! no problem :-) let's chat about it next week, I can tell you more about why I'm curious about rectangular boxes ;-)
On this front, @dlanzieri has opened a draft PR #60 to add TensorFlow implementations of the cosmological distance computations needed for raytracing. WIP, but it should be quite useful :-)
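As a rough illustration of the kind of computation involved, here is a numpy sketch of the radial comoving distance for flat ΛCDM via trapezoidal quadrature; a TF version would use the same quadrature with tf ops so that χ stays differentiable with respect to cosmological parameters. The function name and interface are illustrative, not the PR's actual API.

```python
import numpy as np

C_OVER_H0 = 2997.92458  # c/H0 in Mpc/h (c = 299792.458 km/s, H0 = 100h km/s/Mpc)

def comoving_distance(a, omega_m=0.3, n_steps=1000):
    """Radial comoving distance chi(a) in Mpc/h for a flat LCDM universe,
    by trapezoidal integration of dchi = (c/H0) da / (a^2 E(a)).
    Illustrative sketch only."""
    aa = np.linspace(a, 1.0, n_steps)
    E = np.sqrt(omega_m / aa**3 + (1.0 - omega_m))  # H(a)/H0, flat LCDM
    f = 1.0 / (aa**2 * E)
    da = aa[1] - aa[0]
    return C_OVER_H0 * 0.5 * np.sum(f[:-1] + f[1:]) * da

chi_z1 = comoving_distance(0.5)  # distance to z = 1 (a = 0.5)
```

For Ωm = 0.3 this gives roughly 2300 Mpc/h at z = 1, consistent with the scales of the 2000 Mpc/h lightcone discussed above.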
@dlanzieri could you share here a few plots to illustrate our current results?
I ran the N-body simulations with a box size of 200x200x2000 Mpc/h and 128x128x1280 particles, corresponding to a 5x5 deg^2 field up to roughly z=0.8. I followed the same pipeline explained by @EiffL a few messages earlier. After creating the lens planes, I implemented ray tracing from them and computed the Born-approximated convergence. The figure here shows the convergence profile along with the Born approximation. Then I focused on the κ power spectrum; the following panel shows the theoretical power spectrum and the one obtained with the Born approximation. I have recently started working on peak statistics; here is the first peak histogram I got:
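For the peak statistics, a simple definition is to histogram the heights of local maxima of the convergence map. A pure-numpy sketch of the peak finder (illustrative only, not the actual analysis code) could be:

```python
import numpy as np

def peak_heights(kappa):
    """Heights of local maxima in a 2D convergence map: pixels strictly
    greater than all 8 neighbours (map edges excluded). The resulting 1D
    array can be fed straight into np.histogram for a peak histogram."""
    c = kappa[1:-1, 1:-1]
    neigh = [kappa[1 + di:kappa.shape[0] - 1 + di,
                   1 + dj:kappa.shape[1] - 1 + dj]
             for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    is_peak = np.all([c > n for n in neigh], axis=0)
    return c[is_peak]

kappa = np.zeros((5, 5))
kappa[2, 2] = 1.0           # a single isolated peak
peaks = peak_heights(kappa)
```

In practice the map would be smoothed first and peak heights expressed in units of the map noise, but the neighbour comparison is the core of it.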
This issue is to track the implementation of weak lensing raytracing within FlowPM. Our goal here is to add to the single GPU implementation of FlowPM the tools necessary to output a convergence lightcone. As I understand it, here are the rough steps that would be required:
In addition, we will want to implement the following tests:
I have created a lensing branch: https://github.com/modichirag/flowpm/tree/lensing where we can start working on that implementation. Concretely, what I think we will need to do is modify the nbody function here: https://github.com/modichirag/flowpm/blob/b79ecf1f6aa445ac73aa82d9f4b4cb8df16e0a9d/flowpm/tfpm.py#L311 to support outputting additional fields beyond just the final state.

Because this is the first time I try to implement weak lensing ray tracing, I may have an oversimplified understanding of the steps and tests required. @VMBoehm @Maxelee feel free to comment on this issue if there are particularly tricky things we should be on the lookout for.
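One way to support the extra outputs is to have the time-stepping loop keep snapshots of the state at requested stages instead of returning only the final one. A minimal sketch of the idea (names and signature are hypothetical, not FlowPM's actual nbody API):

```python
def nbody_with_snapshots(state, stages, step_fn, export_at):
    """Advance `state` through the scale factors in `stages` using
    `step_fn(state, a0, a1)`, keeping a snapshot whenever the stage
    reached is in `export_at`. Hypothetical sketch, not FlowPM's API."""
    snapshots = []
    for a0, a1 in zip(stages[:-1], stages[1:]):
        state = step_fn(state, a0, a1)   # kick/drift to the next stage
        if a1 in export_at:
            snapshots.append(state)      # e.g. to export a lens plane here
    return state, snapshots

# Toy usage: a 'state' that just counts steps taken.
final, snaps = nbody_with_snapshots(
    state=0,
    stages=[0.1, 0.2, 0.3, 0.4],
    step_fn=lambda s, a0, a1: s + 1,
    export_at={0.2, 0.4})
```

In the real graph-mode code the snapshot hook would project the particle slab onto a 2D density plane rather than store the full state, as in the lightcone scheme described earlier in the thread.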
And I think for now we don't want to worry about PGD or distributed computation, we will take care of these things afterwards.