LSSTDESC / 2pt_validation

Repo to track progress on 2PT validation (LSS 1.1.X tasks)

Add shear to CoLoRe #16

Open slosar opened 8 years ago

slosar commented 8 years ago

@damonge @dkirkby Just putting this here before I forget. At some point Seljak was really trying to push for unifying the full g-g, g-s and s-s testing through lognormal mocks, and his group is making one of those superdooper codes. So having shear maps would be fantastic in CoLoRe. It is not urgent, maybe on a 4-month timescale. The way it might work best would be to output interpolated kappa healpix maps as a function of redshift, which can then be turned into shear by appropriate alm transforms in mkcat/fastcat.
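For what it's worth, the kappa-to-shear step via alm transforms is fairly standard. Below is a minimal sketch using healpy, assuming a single HEALPix kappa map at a given source redshift; the function name is illustrative and the overall sign should be checked against whatever spin-transform convention mkcat/fastcat ends up using.

```python
import numpy as np
import healpy as hp

def kappa_to_shear(kappa_map, lmax=None):
    """Turn a convergence map into (gamma1, gamma2) maps via alm transforms,
    using gamma^E_lm = sqrt[(l+2)(l-1) / (l(l+1))] kappa_lm and gamma^B_lm = 0.
    Sign conventions here are illustrative and should be verified."""
    nside = hp.get_nside(kappa_map)
    if lmax is None:
        lmax = 3 * nside - 1
    klm = hp.map2alm(kappa_map, lmax=lmax)
    ell = np.arange(lmax + 1, dtype=float)
    fl = np.zeros(lmax + 1)
    fl[2:] = np.sqrt((ell[2:] + 2.0) * (ell[2:] - 1.0) / (ell[2:] * (ell[2:] + 1.0)))
    glm_e = hp.almxfl(klm, fl)       # E-mode shear alm
    glm_b = np.zeros_like(glm_e)     # no B modes from a pure convergence field
    gamma1, gamma2 = hp.alm2map_spin([glm_e, glm_b], nside, spin=2, lmax=lmax)
    return gamma1, gamma2
```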

damonge commented 8 years ago

I have talked to @fjaviersanchez about doing this, and we were planning on implementing it during his visit to Oxford for the collaboration meeting. As an aside, my big plan is to merge CoLoRe with CRIME (intensity mapping) to have a multi-probe simulator. In this case it could use CRIME's SHT utilities to do all the alm transforms on the fly.

slosar commented 7 years ago

Just adding a quote from @damonge, who seems to have done most of it.

In case you want to know more, I've written a few more details about the new implementation below (quite a few things have changed). After fixing a couple of bugs I'm a bit more comfortable with the newest version (still in the w_pixelization branch), but many things remain to be checked (e.g. power spectra). It'd be great to get help with this, since I'll be a bit busy in the coming days with applications. I'll contact Javier in case he wants to play around with the code a bit, but any help would be great.
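(As one concrete check of the power spectra, a minimal sketch with healpy is below; the output file name is hypothetical, and the theory curve to compare against would come from whatever Limber-type prediction is being used.)

```python
import healpy as hp

m = hp.read_map("colore_kappa_map.fits")  # hypothetical output map
nside = hp.get_nside(m)
cl = hp.anafast(m, lmax=2 * nside)
# Roughly correct for the pixel window before comparing with the theory C_ell
pw = hp.pixwin(nside)[:cl.size]
cl_corrected = cl / pw ** 2
```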

I'm quite excited about the way the code is structured right now. It should now be quite easy to include both IM and CMB lensing into it and have the "super-mega-code" I had in mind. I'm actually thinking about making it public with an e-print or something like that explaining how it works once it's complete.

Anyway, more details below and feedback welcome.

Regards,

David

Details about the changes:

1. I've changed the way the code works so that, once the fields (delta and phi) are generated on the Cartesian grid, they are immediately interpolated onto a set of onion shells, each divided into angular pixels to form spherical voxels. The voxel sizes are chosen to yield a volume similar to or smaller than that of the Cartesian cells in order to avoid extra smoothing. Thus the radial width of each shell is constant, but the pixel resolution grows with radius (a rough sketch of this choice follows after the list).
2. Note that each node only keeps the voxels affected by the Cartesian cells it holds, in order to avoid using too much memory. To afterwards do the integral along the line of sight, the code then does an all-to-all communication to reorganize the voxels so that each node ends up storing a "pyramid" of pixels that covers the whole r-range, but only a particular "square" of the sky. This is done by "slices2beams" in pixelization.c.
3. The Hessian of the lensing potential is then computed by integrating along the line of sight in each of these pyramids (of course, with the corresponding lensing kernel).
4. The galaxies are sampled from the density field in the voxels (instead of the Cartesian cells, which have already been freed anyway), and ellipticities and RSD displacements are assigned based on the local values of the velocity field and the derivatives of the lensing potential.
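As a minimal sketch of the resolution choice in step 1 above (assuming HEALPix pixels via healpy; the function and the box numbers are purely illustrative), one could pick, for each shell, the smallest power-of-two nside whose voxel volume does not exceed the Cartesian cell volume:

```python
import numpy as np
import healpy as hp

def nside_for_shell(r, dr, dx):
    """Smallest power-of-two nside such that the voxel volume r^2 * dr * Omega_pix
    is no larger than the Cartesian cell volume dx^3 (to avoid extra smoothing).
    Omega_pix = 4*pi / (12 * nside^2) for HEALPix."""
    if r <= 0.0:
        return 1
    nside_min = np.sqrt(np.pi * r ** 2 * dr / (3.0 * dx ** 3))
    nside = 1
    while nside < nside_min:
        nside *= 2
    return nside

# Illustrative numbers: a 2 Gpc/h box with 1024^3 cells and shells of width dr = dx
dx = 2000.0 / 1024  # Mpc/h
dr = dx
for r in (100.0, 1000.0, 3000.0):
    ns = nside_for_shell(r, dr, dx)
    print(r, ns, hp.nside2resol(ns, arcmin=True))
```

With this choice the shell width stays fixed while the angular resolution grows with radius, as described in step 1.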