LSSTDESC / DEHSC_LSS

Scripts and workflow relevant to HSC data

Systematics maps: #3

Closed damonge closed 6 years ago

damonge commented 6 years ago

We'd like to create maps of the following:

- Depth
- Xcountinputs
- Dust extinction
- Star map
- Bright object mask

I'll open another issue for other quantities (e.g. airmass) that are probably stored in the frames table. @fjaviersanchez @humnaawan @cavestruz

humnaawan commented 6 years ago

Ok so just to summarize some things related to the to-dos:

Depth

HSC_5sigma-depth.ipynb creates the 5sigma depth maps using the three methods listed above.

For example: [screenshot: example 5sigma depth map]

Power_spectra.ipynb calculates the galaxy power spectrum using two limiting magnitudes based on Javi's depth calculation method.

HSC_5sigma-depth-twoPtCorr.ipynb calculates the two-point correlation function w(theta) using depth maps from the three methods.
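For reference, the kind of estimator behind such a notebook can be sketched as a brute-force Landy-Szalay calculation. This is a minimal illustration, not the notebook's actual code; the real analysis would use a tree-based pair counter, and all names and binning choices here are illustrative.

```python
# Brute-force Landy-Szalay sketch; fine for small samples only.
import numpy as np

def pair_counts(sample_a, sample_b, bins_deg):
    """Histogram of angular separations (degrees) between two (ra, dec)
    samples, both given in degrees. Brute force: O(N*M) pairs."""
    ra1, dec1 = (np.radians(x)[:, None] for x in sample_a)
    ra2, dec2 = (np.radians(x)[None, :] for x in sample_b)
    cos_t = (np.sin(dec1) * np.sin(dec2)
             + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    theta = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return np.histogram(theta.ravel(), bins=bins_deg)[0]

def w_theta(data, rand, bins_deg):
    """Landy-Szalay estimator w = (DD - 2DR + RR) / RR, with pair counts
    normalized by the number of pairs. Start the bins above zero so that
    self-pairs (theta = 0) in the auto-counts are excluded."""
    nd, nr = len(data[0]), len(rand[0])
    dd = pair_counts(data, data, bins_deg) / (nd * nd)
    dr = pair_counts(data, rand, bins_deg) / (nd * nr)
    rr = pair_counts(rand, rand, bins_deg) / (nr * nr)
    with np.errstate(invalid="ignore", divide="ignore"):
        return (dd - 2.0 * dr + rr) / rr
```

A quick sanity check: if the "data" and "randoms" are the same points, the estimator returns zero in every bin with non-empty random counts.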

Xcountinputs, Dust extinction, Star map

HSC-GAMA15H-forced+random-maps.ipynb has the maps for these quantities. If they look ok, the code can be run at a higher resolution and the maps can be saved.

Bright object mask

HSC-GAMA15H-forced+random-maps.ipynb looks at iflags_pixel_bright_object_center and iflags_pixel_bright_object_any. These columns are present in both the forced and random tables, and the maps from the two tables are not the same; for example, the cumulative distributions from the two tables differ: [screenshots: cumulative distributions of the bright-object flags from the forced and random tables]

It is unclear why the distributions are different. Thoughts?
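One way to quantify the difference is to compute the per-pixel fraction of flagged objects separately from the forced and random tables and compare the two maps. A minimal numpy sketch, assuming per-object pixel indices (e.g. from healpy.ang2pix); the names are illustrative, not the notebook's code:

```python
# Per-pixel flagged fraction (illustrative sketch, not the notebook's code).
import numpy as np

def masked_fraction(pix, bright_flag, npix):
    """Fraction of flagged objects in each pixel.
    pix: per-object pixel index (e.g. from healpy.ang2pix);
    bright_flag: boolean array, e.g. iflags_pixel_bright_object_any."""
    total = np.bincount(pix, minlength=npix).astype(float)
    flagged = np.bincount(pix, weights=bright_flag.astype(float),
                          minlength=npix)
    with np.errstate(invalid="ignore", divide="ignore"):
        # NaN marks pixels with no objects at all
        return np.where(total > 0, flagged / total, np.nan)
```

If the mask geometry is identical in the two tables but object detection is suppressed near bright stars, the forced-table fraction will sit systematically below the random-table one.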

rmandelb commented 6 years ago

@humnaawan - looks like you are making great progress! Just a few questions and comments:

  1. Where can I find a description of the random sky-std and other methods?

  2. I think your last plot is actually a sign of a faulty test, and not a sign that "the maps from the two tables are not the same". The maps from the two tables should cover the exact same areas of the sky. But you aren't testing that - you're testing whether the same fraction of objects is masked (if I've understood the histogram correctly). For that to be true, you'd need the masks to cover the same areas, and you'd need the fraction of real and random points near bright stars to be the same. We know that systematic errors associated with object detection and characterization near bright stars lead to a violation of the second condition (that's why we want to mask these regions). So your plot is showing signatures of the second issue, which is not the one you are trying to test.

damonge commented 6 years ago

@rmandelb : we're looking at three different methods

  1. The magnitude limit is defined as the average of 5*flux_err over all galaxies in each pixel, then converted to a magnitude.
  2. "Javi" is Javi's method: we make histograms of the S/N in bins of magnitude for all galaxies in a given pixel and define the magnitude limit as the magnitude bin whose median S/N is ~5.
  3. Using the random table as described in https://hsc-release.mtk.nao.ac.jp/doc/index.php/random-points-for-dr1/

I found #2 a bit confusing. After reading Section 5.6 of the DR1 paper, I see that the definition you used is a simpler version of it (basically, the mean magnitude of all galaxies in each pixel with 4 < S/N < 6). We should check this one too.
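As a hedged sketch (not the notebook's code), method 1 and the DR1-style definition can both be computed per pixel like this; the pixel indices would come from e.g. healpy.ang2pix, and the AB zero point of 27 is illustrative only:

```python
# Per-pixel depth estimates: method 1 (mean flux error) and a DR1-style
# estimate. Illustrative sketch; names and zero point are assumptions.
import numpy as np

ZP = 27.0  # assumed AB zero point, for illustration only

def depth_maps(pix, flux, flux_err, npix):
    """Per-pixel 5sigma depth from (1) the mean-flux-error method and
    (2) a DR1-style estimate (mean magnitude of sources with 4 < S/N < 6)."""
    depth_fluxerr = np.full(npix, np.nan)
    depth_dr1 = np.full(npix, np.nan)
    snr = flux / flux_err
    mag = ZP - 2.5 * np.log10(flux)
    for p in np.unique(pix):
        sel = pix == p
        # Method 1: mean of 5*flux_err, converted to a magnitude
        depth_fluxerr[p] = ZP - 2.5 * np.log10(5.0 * flux_err[sel].mean())
        # DR1-style: mean magnitude of sources with S/N between 4 and 6
        near5 = sel & (snr > 4) & (snr < 6)
        if near5.any():
            depth_dr1[p] = mag[near5].mean()
    return depth_fluxerr, depth_dr1
```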

humnaawan commented 6 years ago

@rmandelb yes, you're right! I was focusing on the overall geometry of the mask, and not the individual points. I will try to see how we can test the fraction of real vs. random points near bright stars, and see if we can have the same bright object mask for the galaxies and the randoms.

@damonge I updated the analysis and added the depth definition from the DR1 paper as well. Comparing the 5sigma depth from these methods, we have: [screenshot: comparison of 5sigma depth maps across the methods]

All methods aside from random-sky-std look comparable, as we also see in the 2pt correlations (with limitingMag = {'Javis': 26.0, 'randomSkyStd-isPrimary': 25.0, 'FluxErr': 25.8, 'dr1paper': 25.8}):

[screenshot: 2pt correlation functions for the different depth methods]

I also created maps for 10sigma depth (now ignoring random-sky-std and looking only at the other three methods): [screenshot: 10sigma depth maps from the three methods]

We can then look at the 2pt correlations based on these maps (with limitingMag = {'Javis': 25.0, 'FluxErr': 25.0, 'dr1paper': 25.0}):

[screenshot: 2pt correlations based on the 10sigma depth maps]

More details can be found in the HSC_10sigma-depth-twoPtCorr.ipynb notebook, but the three methods are essentially the same in terms of 5sigma depth and 2pt correlations. They do differ a bit in the per-pixel depth standard deviation, though I am unsure how important that difference is: [screenshot: comparison of depth standard deviations across methods]

damonge commented 6 years ago

@humnaawan @fjaviersanchez this is really great! OK, let's use a magnitude cut of i=25, and a footprint defined by this cut on the 10-sigma depth map generated using the fluxerr method. This is probably conservative, but not necessarily too much, since photo-z are bound to be horrible at these magnitudes.
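A minimal sketch of applying this selection, assuming a fluxerr-method 10sigma depth map indexed by the same pixelization as the catalog (names are illustrative, not the actual script):

```python
# Keep galaxies brighter than i = 25 that lie in pixels whose 10sigma
# depth reaches the cut. Illustrative sketch only.
import numpy as np

def apply_cuts(pix, imag, depth_10sigma, mag_cut=25.0):
    """pix: per-galaxy pixel index (same pixelization as the depth map);
    imag: i-band magnitudes; depth_10sigma: depth map with NaN = unseen."""
    footprint = np.isfinite(depth_10sigma) & (depth_10sigma >= mag_cut)
    return (imag < mag_cut) & footprint[pix]
```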

I've opened a new issue #11 to generate a standard script that takes the data, applies these cuts and generates the masks (including bright object mask, which I have removed from the list of tasks here). We should probably now concentrate on generating the systematics maps.

damonge commented 6 years ago

Current systematics mapping pipeline documented in README