DifferentiableUniverseInitiative / flowpm

Particle Mesh simulation in TensorFlow
https://flowpm.readthedocs.io/
MIT License

Add weak lensing convergence output #45

Closed EiffL closed 3 years ago

EiffL commented 3 years ago

This issue is to track the implementation of weak lensing raytracing within FlowPM. Our goal here is to add to the single GPU implementation of FlowPM the tools necessary to output a convergence lightcone. As I understand it, here are the rough steps that would be required:

In addition, we will want to implement the following tests:

I have created a lensing branch https://github.com/modichirag/flowpm/tree/lensing where we can start working on that implementation. Concretely, what I think we will need to do is to modify the nbody function here: https://github.com/modichirag/flowpm/blob/b79ecf1f6aa445ac73aa82d9f4b4cb8df16e0a9d/flowpm/tfpm.py#L311 to support outputting additional fields than just the final states.
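To make the idea concrete, here is a minimal sketch (numpy, not flowpm's actual API) of what "outputting additional fields than just the final states" could look like: an nbody-style loop that also records copies of the state at requested scale factors. The function and argument names are hypothetical.

```python
# Hypothetical sketch: an nbody-style stepping loop that, in addition to the
# final state, collects intermediate snapshots at requested scale factors,
# which a lensing lightcone could later consume. Not flowpm's real interface.
import numpy as np

def nbody_with_snapshots(state, stages, kick_drift_step, snapshot_stages):
    """state           : array-like simulation state
    stages          : list of scale factors to step through
    kick_drift_step : callable(state, a0, a1) -> new state (stand-in for a PM step)
    snapshot_stages : subset of `stages` at which to record a copy of the state
    """
    snapshots = {}
    for a0, a1 in zip(stages[:-1], stages[1:]):
        state = kick_drift_step(state, a0, a1)
        if a1 in snapshot_stages:
            snapshots[a1] = np.copy(state)
    return state, snapshots

# Toy usage: the "PM step" just rescales the state, to show the bookkeeping.
final, snaps = nbody_with_snapshots(
    np.ones(4), [0.1, 0.5, 1.0],
    kick_drift_step=lambda s, a0, a1: s * (a1 / a0),
    snapshot_stages={0.5},
)
```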

Because this is the first time I've tried to implement weak lensing ray tracing, I may have an overly simplified understanding of the steps and tests that will be required. @VMBoehm @Maxelee feel free to comment on this issue if there are particularly tricky things that we should be on the lookout for.

And I think for now we don't want to worry about PGD or distributed computation, we will take care of these things afterwards.

EiffL commented 3 years ago

And just to connect the dots: for now we will be using the old fastpm background.py file for everything connected to cosmology, but Denise and I will be working in parallel on a pure TF implementation of these things in #46. In principle we'll then be able to use this new TF code as a drop-in replacement.

If there are additional cosmological computations required for the lensing part, that are not currently available in background.py do let us know :-)

EiffL commented 3 years ago

So I went through your implementation and I think I understand what you did, so I should be able to finish the implementation. Thanks Max :-) But this made me think of another point, and I have a question for you and @VMBoehm.

Ultimately... we want to be able to estimate initial conditions and do parameter inference from the observed lensing lightcone... If we replicate the box, when we fit it to data, a given part of the volume gets constrained multiple times and from multiple orientations, so that's no good. If we want to fit a lightcone to data, we need to simulate the full lightcone, without box replication, no?

I have a trivial way and a complicated way to do this in FlowPM:

What do you guys think? Is removing box replication a pipe dream?

EiffL commented 3 years ago

like this :-) [image]

EiffL commented 3 years ago

So I've thought about it a bit more, and I'm leaning towards rectangular boxes.

Another point: I realized that MADLens is following the Born approximation, right?

I'm wondering... what's the cost/benefit analysis there. I'm looking at Andrea's paper: https://arxiv.org/pdf/1612.00852.pdf and doing full ray-tracing doesn't seem to be orders of magnitude more costly, even if a bit more work is needed, I agree. Probably far less expensive than the cost of running the actual N-body though...

In all cases, looks like we will need to use lens planes, so we need a lightcone function that exports lensplanes from the correct location in the simulation volume at the correct time. These can be simply weighted together to obtain convergence maps with the Born approximation, and time permitting, we can also use them to implement some full ray tracing.
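Just to illustrate what a lens plane export amounts to, here is a minimal sketch (numpy; names are illustrative, not flowpm's API): project a comoving slice of the 3D density field onto a 2D surface density by summing along the line of sight.

```python
# Minimal sketch of exporting a lens plane from a slice of a 3D density
# field: sum the density along the line-of-sight axis over the slab of
# cells belonging to that plane. Illustrative only, not flowpm's API.
import numpy as np

def export_lens_plane(density, z_start, z_end):
    """Project cells z_start <= z < z_end (axis 2 = line of sight) to 2D."""
    return density[:, :, z_start:z_end].sum(axis=-1)

# Toy usage: a uniform 4x4x10 density field, plane of thickness 4 cells.
rho = np.ones((4, 4, 10))
plane = export_lens_plane(rho, 2, 6)
```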

EiffL commented 3 years ago

Also.......... genuine question here.... what's wrong with forward ray tracing? the light rays are not super deflected in general so if we shoot them from high redshift towards earth, they shouldn't fall very far...or am I too naive?

EiffL commented 3 years ago

I gave it a try and implemented a very naive lensing lightcone from a rectangular simulation over there: https://github.com/modichirag/flowpm/blob/lensing/notebooks/TestLightcone.ipynb

What I'm doing is running a simulation in a 64x64x640 box (physical size 200x200x2000 Mpc/h), which should more or less correspond to a 5x5 deg^2 field up to z=0.8. I first run the whole volume until reaching the scale factor of the highest redshift lens plane (in this case around z=0.8), then I run the simulation in steps corresponding to a given Δχ and project slices of the simulation onto 2D density planes. Finally, I discard the part of the volume that will no longer be needed in the lightcone.
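The stepping loop described above can be sketched like this (numpy; the "projection" and "evolution" are toy stand-ins, and all names are hypothetical):

```python
# Sketch of the lightcone stepping: advance the simulation interval by
# interval in comoving distance, project each slab onto a plane, then
# discard particles that have already passed through the lightcone.
import numpy as np

def lightcone_planes(positions, chi_edges, step_to):
    """positions : (N, 3) comoving positions, axis 2 along the line of sight.
    chi_edges : decreasing comoving-distance edges, from far to near.
    step_to   : callable(positions, chi) evolving the simulation to that distance.
    Returns the per-slab particle counts (a toy stand-in for 2D density
    planes) and whatever particles remain at the end."""
    counts = []
    for chi_far, chi_near in zip(chi_edges[:-1], chi_edges[1:]):
        positions = step_to(positions, chi_near)
        in_slab = (positions[:, 2] >= chi_near) & (positions[:, 2] < chi_far)
        counts.append(int(in_slab.sum()))                  # "project" this slab
        positions = positions[positions[:, 2] < chi_near]  # drop the used volume
    return counts, positions

# Toy run: three particles, identity "evolution", two slabs.
pts = np.array([[0.0, 0.0, 0.5], [0.0, 0.0, 1.2], [0.0, 0.0, 1.8]])
counts, remaining = lightcone_planes(pts, [2.0, 1.0, 0.0], lambda p, chi: p)
```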

So it looks like this: [image] This is the whole simulation at z=0.8, when the particles start traveling through the lightcone.

And this is the final simulation at z=0: [image] Here we see that most of the particles have been removed, so it's much faster.

And here are equal comoving size slices in the volume, exported at the corresponding scale factor: [image]

What remains to be implemented is the actual ray tracing from these lens planes. In the simplest case, it's just a matter of weighting these planes according to the lensing efficiency.
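For reference, the Born-approximation weighting amounts to summing the planes with the usual lensing efficiency kernel. A minimal sketch (flat universe, single source plane; all numbers below are toy values, not fit parameters):

```python
# Illustrative Born-approximation convergence from a stack of lens planes:
#   kappa = sum_i W(chi_i) * delta_i * dchi
# with the flat-universe efficiency kernel
#   W(chi) = (3/2) Omega_m (H0/c)^2 (1 + z) chi (chi_s - chi) / chi_s.
# A sketch under these assumptions, not flowpm's implementation.
import numpy as np

def born_convergence(planes, chis, zs, chi_s, omega_m, H0_over_c, dchi):
    kappa = np.zeros_like(planes[0])
    for delta, chi, z in zip(planes, chis, zs):
        w = 1.5 * omega_m * H0_over_c**2 * (1 + z) * chi * (chi_s - chi) / chi_s
        kappa += w * delta * dchi
    return kappa

# Toy usage: a single uniform overdensity plane halfway to the source.
kappa = born_convergence(
    planes=[np.ones((2, 2))], chis=[0.5], zs=[0.0],
    chi_s=1.0, omega_m=0.3, H0_over_c=1.0, dchi=1.0,
)
```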

Quick question though, do you do something special in MADLens to have a smooth lens plane at low redshift? Or it's just a matter of having enough resolution?

EiffL commented 3 years ago

@dlanzieri since I know you have started looking at this issue, here is a little bit of literature that you may find useful:

Maxelee commented 3 years ago

Hi @EiffL!

Sorry for being MIA for the week. It is grad application season and I am racing to get all of my statements (and the MADLens paper) finished... We are sooooo close. I do not know a lot about ray tracing, so I will read those papers that you linked as well! In terms of implementing rectangular boxes, I am a little confused about the cost/benefit and would like to understand that better. Maybe when we chat next week you could talk me through it?

EiffL commented 3 years ago

Hi Max, of course! no problem :-) let's chat about it next week, I can tell you more about why I'm curious about rectangular boxes ;-)

EiffL commented 3 years ago

On this front, @dlanzieri has opened a draft PR #60 to add TensorFlow implementations of the cosmological distance computations needed for raytracing. It's a WIP but should be quite useful :-)
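To give a flavor of the kind of computation being ported there, here is a toy numpy sketch of the radial comoving distance in flat ΛCDM by direct numerical integration (parameters are illustrative; the actual PR targets TensorFlow):

```python
# Toy sketch of a radial comoving distance computation in flat LCDM:
#   chi(z) = (c/H0) * integral_0^z dz' / E(z'),  E(z) = sqrt(Om (1+z)^3 + 1 - Om)
# integrated with a simple midpoint rule. Illustrative, not the PR's code.
import numpy as np

def comoving_distance(z, omega_m=0.3, n_steps=1000):
    """chi(z) in Mpc/h (c/H0 = 2997.92 Mpc/h with H0 = 100 h km/s/Mpc)."""
    c_over_H0 = 299792.458 / 100.0
    edges = np.linspace(0.0, z, n_steps + 1)
    zmid = 0.5 * (edges[1:] + edges[:-1])
    E = np.sqrt(omega_m * (1 + zmid) ** 3 + (1 - omega_m))
    return c_over_H0 * np.sum((z / n_steps) / E)
```

As a sanity check, chi(z=0.8) comes out just under 2000 Mpc/h for Omega_m = 0.3, which matches the 2000 Mpc/h box depth used above for a field reaching z ≈ 0.8.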

EiffL commented 3 years ago

@dlanzieri could you share here a few plots to illustrate our current results?

dlanzieri commented 3 years ago

I ran the N-body simulations with a box size of 200x200x2000 Mpc/h and 128x128x1280 particles, corresponding to a 5x5 deg^2 field up to roughly z=0.8. I followed the same pipeline @EiffL explained a few messages earlier. After creating the lens planes, I implemented ray tracing from them and computed the Born-approximated convergence. The figure below shows the convergence map obtained with the Born approximation:

[image: kmap_denise]

Then I focused on the κ power spectrum; the following panel shows the theoretical power spectrum and the one obtained with the Born approximation:

[image: power_con128]

I have recently started working on peak statistics; here is the first peak histogram I got:

[image: kmap_peak]
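For anyone wanting to reproduce the κ power spectrum measurement, here is a minimal flat-sky estimator sketch (numpy): azimuthally average |FFT(κ)|² in bins of ℓ. The field size and binning below are illustrative, not the exact values used for the figures.

```python
# Sketch of a flat-sky kappa power spectrum estimate from a square map:
# 2D power = (field/n^2)^2 |fft2(kappa)|^2, azimuthally averaged in ell bins.
# Field angle and binning are illustrative choices.
import numpy as np

def kappa_power_spectrum(kappa, field_deg=5.0, n_bins=8):
    n = kappa.shape[0]
    field = np.deg2rad(field_deg)                          # field side, radians
    p2d = (field / n**2) ** 2 * np.abs(np.fft.fft2(kappa)) ** 2
    lx = 2 * np.pi * np.fft.fftfreq(n, d=field / n)        # flat-sky ell grid
    ell = np.sqrt(lx[:, None] ** 2 + lx[None, :] ** 2)
    bins = np.linspace(ell[ell > 0].min(), ell.max(), n_bins + 1)
    idx = np.digitize(ell.ravel(), bins)
    cl = np.array([p2d.ravel()[idx == i].mean() if np.any(idx == i) else 0.0
                   for i in range(1, n_bins + 1)])
    return 0.5 * (bins[:-1] + bins[1:]), cl                # bin centers, C_ell

# Toy usage on a Gaussian random map.
ells, cls = kappa_power_spectrum(np.random.default_rng(0).normal(size=(32, 32)))
```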