Closed EiffL closed 3 years ago
I've had a little time to look at this lenstools problem ^^' after struggling a bit, I figured out the correct way to export our lensplanes and made a small notebook here: https://github.com/modichirag/flowpm/blob/lensing/notebooks/dev/test_lenstools.ipynb
But, in doing so, I realized that most of the lensing efficiency kernel computation needed to turn simulation volumes into lens planes happens outside of the raytracing functions in lenstools. This is still a pretty cool check to do, especially when we look at other ray tracing strategies, but we'll need another test that starts from the particle cube itself to validate the rest of the lensing efficiency computation.
After a little bit of exploration, I got a mostly working direct comparison between LensTools and our own raytracing:
Added a notebook with temporary demo code here: https://github.com/DifferentiableUniverseInitiative/flowpm/blob/bf6d45309fc6c316907e7cf28ca5c20c5ae1d16e/notebooks/dev/lenstools_lightcone_test-adapted.ipynb
It's looking mostly comparable. LensTools applies a smoothing to the lensplanes, so I kind of expect the map to be smoother, but I'm not sure the projection matches 100%. Must investigate more....
That's exactly right. So this should test the entire ray tracing procedure. The results don't match 100% because there is a fundamental difference in how we build the lens planes compared to LensTools. LensTools (and, as far as I can see, most/all ray tracing codes) builds the lensplanes in comoving coordinates, then interpolates the lensplanes onto the angular grid on the sky during the ray tracing. In what we do, which is inspired by MADLens, we instead build the lensplanes directly on the angular grid on the sky.
Where it differs is that the LensTools procedure imposes a maximal resolution in comoving scales, whereas our approach imposes a resolution in angular scales. And that makes a difference, since the same comoving scale corresponds to different angular scales at different redshifts. So for instance, here is how the same low-redshift lensplanes appear on the sky under the 2 procedures:
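The resolution mismatch between the two conventions can be made concrete: the same comoving voxel subtends a much larger angle at low redshift than at high redshift. Here is a back-of-the-envelope sketch (not flowpm code; it assumes a flat ΛCDM cosmology with Ωm=0.3 and the 200 Mpc/h box with a 64³ mesh used elsewhere in this thread):

```python
import numpy as np

def comoving_distance(z, Om=0.3, n=10000):
    """Comoving distance in Mpc/h for a flat LCDM universe,
    via simple trapezoidal integration of 1/E(z)."""
    c = 299792.458  # speed of light in km/s
    H0 = 100.0      # km/s/(Mpc/h), so distances come out in Mpc/h
    zs = np.linspace(0.0, z, n + 1)
    Ez = np.sqrt(Om * (1 + zs)**3 + (1 - Om))
    integrand = 1.0 / Ez
    return c / H0 * np.sum(0.5 * (integrand[:-1] + integrand[1:])) * (z / n)

voxel = 200.0 / 64  # comoving voxel size in Mpc/h (200 Mpc/h box, 64 cells)
for z in [0.1, 1.0]:
    chi = comoving_distance(z)
    theta = np.degrees(voxel / chi)  # small-angle approximation
    print(f"z={z}: voxel subtends ~{theta:.3f} deg")
```

The z=0.1 voxel subtends an angle several times larger than the z=1 one, which is why a fixed angular grid and a fixed comoving grid cannot resolve the same structures at all redshifts.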
also, there seems to be a small difference in projection, still trying to understand where it comes from.
I've thought about it..... and I think it may be safer to follow the LensTools approach instead of what we are doing...
@EiffL ok, I see. So in this case creating a test code probably makes no sense, because the difference in how we build the lens planes will make the test fail. So we can implement a new lightcone (or at least change part of it) following the LensTools approach.
Ok, I'm running out of time for now and didn't get a chance to write a full function :'-( , but I added a demo notebook that computes lensplanes like lenstools: https://github.com/DifferentiableUniverseInitiative/flowpm/blob/lensing/notebooks/dev/lenstools_lightcone_test-adapted-v2.ipynb The lightcone function there exports density planes in comoving coordinates, similar to what this LensTools function does: https://github.com/apetri/LensTools/blob/9151988bfe6fbd6809353a33cfb556d44b6806ed/lenstools/simulations/nbody.py#L522
Then I compare the density planes to the ones lenstools computes (left to right: lenstools, flowpm, difference) and it looks pretty good. It's not 100% the same, because we use CIC painting and lenstools apparently does nearest neighbor painting, or something like that. But it's close enough that we can compare the 2, with an appropriate margin for error.
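The difference between the two painting schemes is easy to see in one dimension: nearest-grid-point (NGP) drops all of a particle's mass into a single cell, while cloud-in-cell (CIC) splits it linearly between the two nearest cells. A small illustration (not the actual flowpm or lenstools code):

```python
import numpy as np

def paint_ngp(pos, n=8):
    """Nearest-grid-point: all mass goes to the closest cell center."""
    rho = np.zeros(n)
    rho[int(np.floor(pos + 0.5)) % n] += 1.0
    return rho

def paint_cic(pos, n=8):
    """Cloud-in-cell: mass is shared linearly between the two nearest cells."""
    rho = np.zeros(n)
    i = int(np.floor(pos))
    frac = pos - i
    rho[i % n] += 1.0 - frac
    rho[(i + 1) % n] += frac
    return rho

print(paint_ngp(2.3))  # all mass in cell 2
print(paint_cic(2.3))  # ~0.7 in cell 2, ~0.3 in cell 3
```

CIC effectively smooths the field by up to one cell, so a CIC-painted plane will differ from an NGP-painted one at the grid scale, consistent with the small residuals seen in the comparison.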
Then at the end of the notebook, I show how to use the tfa.image.interpolate_bilinear function of tensorflow-addons to compute the projected field on the sky. What remains to be done is simply to write the Born approximation function equivalent to
https://github.com/apetri/LensTools/blob/9151988bfe6fbd6809353a33cfb556d44b6806ed/lenstools/simulations/raytracing.py#L1332
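For reference, the Born approximation just sums the density planes weighted by the lensing efficiency kernel. Here is a minimal NumPy sketch of that sum (all names are illustrative, not flowpm's actual API; it assumes flat ΛCDM, distances in Mpc/h, and overdensity planes already integrated over a slab of thickness dchi):

```python
import numpy as np

def born_convergence(density_planes, chis, dchi, scale_factors, chi_source, Om=0.3):
    """Born-approximation convergence:
    kappa = (3/2) Om (H0/c)^2 * sum_i chi_i (1 - chi_i/chi_s) delta_i / a_i * dchi

    density_planes: list of 2D overdensity maps
    chis:           comoving distances to the planes (Mpc/h)
    dchi:           slab thickness (Mpc/h)
    scale_factors:  scale factor of each plane (1/(1+z))
    chi_source:     comoving distance to the source plane (Mpc/h)
    """
    c = 299792.458  # km/s
    H0 = 100.0      # km/s/(Mpc/h)
    const = 1.5 * Om * (H0 / c)**2
    kappa = np.zeros_like(density_planes[0])
    for delta, chi, a in zip(density_planes, chis, scale_factors):
        weight = const * chi * (1.0 - chi / chi_source) / a * dchi
        kappa += weight * delta
    return kappa
```

Only planes in front of the source (chi < chi_source) contribute positively; the (1 - chi/chi_source) factor is the usual lensing efficiency.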
@EiffL ok, thank you! Maybe I need a bit of time to get a handle on this, especially if I want to write the Born Approximation.
Sorry it took me a few hours to document what I did >.< I made some edits that are messing up the notebook, so instead of pushing my new version of it, I'll share here the steps that I followed so that you can reproduce them. There are essentially 2 things:
In any case, I see this pattern in the residual image disappear if I do the following modification in the customized FastPMSnapshot.getPositions() function included in the notebook:
```python
# Read in positions in Mpc/h
if (first is None) or (last is None):
    positions = (data["0/Position"][:] + 0.5/64*200) * self.Mpc_over_h
else:
    positions = data["0/Position"][first:last] * self.Mpc_over_h
```
See, I added a shift of 0.5/64*200, which corresponds to a distance of half a voxel, converted to Mpc/h. So we should just make that shift a little more generic, i.e. using the values included in self.header, but otherwise I think this is as good a place as any to have it.
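The generic version of that shift would be computed from the box size and mesh resolution rather than hard-coding 0.5/64*200. A tiny sketch (the header field names are assumptions, not the actual FastPMSnapshot attributes):

```python
# Half-voxel offset in Mpc/h, derived from the simulation configuration
# instead of hard-coded constants. In the real code these two values would
# come from self.header (field names here are hypothetical).
box_size = 200.0  # box side in Mpc/h (assumed)
n_mesh = 64       # mesh cells per side (assumed)
half_voxel = 0.5 * box_size / n_mesh
print(half_voxel)  # 1.5625 Mpc/h for this configuration
```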
```python
from scipy.ndimage import gaussian_filter

# For instance, to get the smoothed lensplane
smooth_ps = gaussian_filter(ps[ind], 1)
```
and I get this: which looks very good :-)
didn't want to overcrowd our dev folder with notebooks, so I put the notebook I used for the plots above in a separate gist :-) https://gist.github.com/EiffL/65fc557ab487a9eba4d4abfcb9f1f266 Just in case it might be useful to look at the whole thing to understand what I did
Let me make a TODO list:

- test_raytracing.py, with the FastPM class inside to reload snapshots

@dlanzieri so with this last update, I think we are almost all good to merge this branch ;-) We would just like to see now whether we can make a simulation with a reasonable lensing power spectrum. We just may not be able to validate it exactly against theory, but I guess that's life.
Hi @dlanzieri, I tried something that we talked about today to reduce how much memory we need, and make a lightcone reusing the simulation volume several times. Have a look here :-) : https://github.com/DifferentiableUniverseInitiative/flowpm/blob/lensing/notebooks/dev/test_lensing_random.ipynb
And I can make pretty nice convergence maps:
But there is something wonky going on when I compare the power spectrum to the theory.... it doesn't match for all source redshifts, not sure why.... It works for some source redshifts though. Might be a problem with the fact that we only have coarse lens planes here.... not sure.
Hi @EiffL ! I see, I tried to reproduce the same plot and also to plot the power spectrum for different source redshifts. Actually, the power spectrum obtained from the convergence map doesn't seem to change with different source redshifts.
Interesting! Hummmm.... Can you plot the flowpm power spectra for all source redshifts? I wonder what they look like at smaller redshifts.
Ok, scratch what I just said, the power spectrum does change with different source redshifts.
@dlanzieri I think we are in good shape with this PR :-)
There is one last thing that we could try before merging.... based on our latest results, it looks like an approach that reuses the simulation volume several times with random shifts/rotations is a good idea, as it would save a lot of memory. So, we could add the tools needed for that to raytracing.py.
Do you want to take a look at the code that @Maxelee already added here: https://github.com/DifferentiableUniverseInitiative/flowpm/blob/d8d9036854cefc7b1cbff7a2ca739eae91732eb4/flowpm/experimental/lenstf.py#L8 and here: https://github.com/DifferentiableUniverseInitiative/flowpm/blob/d8d9036854cefc7b1cbff7a2ca739eae91732eb4/flowpm/experimental/lenstf.py#L72
and see if we can reuse pieces of this code to do a smarter thing than what I hacked here:
```python
import numpy as np
import tensorflow as tf

# Randomly shift the plane along both spatial axes (periodic wrap),
# then apply random horizontal and vertical flips
plane = tf.roll(plane, shift=np.random.randint(0, 256), axis=1)
plane = tf.roll(plane, shift=np.random.randint(0, 256), axis=2)
plane = tf.expand_dims(plane, axis=-1)
plane = tf.image.random_flip_left_right(plane)
plane = tf.image.random_flip_up_down(plane)
```
Do you want to take a stab at it?
Hi @EiffL ! Of course, Let's do it :)
Great :-) So, you can try to hack a prototype first, but ultimately, I think we want to modify the density_plane function
https://github.com/DifferentiableUniverseInitiative/flowpm/blob/d8d9036854cefc7b1cbff7a2ca739eae91732eb4/flowpm/raytracing.py#L15
to add an option to apply a given rotation to the density cube, and a given shift, when cutting the density plane.
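The key property such an option needs is that periodic shifts and flips conserve the mass on the plane. A plain NumPy sketch of a helper with that behavior (the function name and signature are hypothetical, not the actual density_plane API):

```python
import numpy as np

def shift_and_flip_plane(plane, shift_x, shift_y, flip_x=False, flip_y=False):
    """Hypothetical helper: apply a periodic shift and optional axis flips
    to a 2D density plane. Periodic rolls and flips rearrange pixels without
    creating or destroying mass, so the plane's total stays unchanged."""
    plane = np.roll(plane, shift_x, axis=0)
    plane = np.roll(plane, shift_y, axis=1)
    if flip_x:
        plane = plane[::-1, :]
    if flip_y:
        plane = plane[:, ::-1]
    return plane
```

In the real density_plane function the rotation/shift would instead be applied to the 3D density cube before the projection, but the mass-conservation property is the same.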
Ok, I've done a full review of the code and fixed a couple of last issues. The lensing notebook can now produce these nice plots for a cube of 128x128x128 particles on a 5x5 deg field, and we kind of agreed it was not completely crazy for a low resolution simulation. In any case, all of our tests against lenstools seem to pass, so I think we are all good to merge this branch \o/
@dlanzieri since I opened the PR I can't write an approval review myself :-) Can you do a last review and see if it looks all good to merge?
Also, note that in order to have a clean notebooks folder that doesn't scare away users ^^' I have moved all development notebooks to the notebooks/dev folder.
Fantastic! Thank you @dlanzieri ! Merging \o/
Thanks to fantastic work by @dlanzieri, we are in the position of opening this PR, which aims to solve #45. It currently implements a tool for computing lightcones and exporting lens planes, as well as a Born approximation of ray tracing.
Here are the goals before turning it into a full Pull Request: