b-remy / gems

GEnerative Morphology for Shear
MIT License

Metacal script #4

b-remy opened this issue 2 years ago

b-remy commented 2 years ago

So far, the Metacal result with a constant shear on 1024 images is the following:

1024/1024 100%

sigma_n: 5e-06
Metacalibration
----------------------
S/N: 31081.3
R11: 0.00769911
shear [0.05852934 0.02753376]
shear true [0.05, 0.0]
m: 0.170587 +/- 0.383094 (99.7% conf)
c: 0.0275338 +/- 0.0409228 (99.7% conf)

I note that even at this high S/N the residual bias is very large compared to the ngmix metacal example at a similar S/N. @EiffL do you have any idea what could be wrong?
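For reference, the quoted m and c follow the usual linear shear-bias model g_meas = (1 + m) g_true + c; a minimal sketch of that arithmetic (pure numpy, matching the numbers above):

```python
import numpy as np

def shear_bias(g_meas, g_true):
    """Multiplicative bias m on g1 and additive bias c on g2,
    following g_meas = (1 + m) * g_true + c (true g2 is zero here)."""
    m = g_meas[0] / g_true[0] - 1.0
    c = g_meas[1]
    return m, c

m, c = shear_bias(np.array([0.05852934, 0.02753376]), np.array([0.05, 0.0]))
# m ≈ 0.1706, c ≈ 0.0275, matching the printout above
```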

EiffL commented 2 years ago

I added a comment, I'm very suspicious of this R11 value: R11: 0.00769911

It would mean that you have almost no response to shear.

b-remy commented 2 years ago

Indeed, thanks! Adding the pixel scale through the PSF and stamp jacobians changed the R11. It also seems to reduce the multiplicative bias, but not by much:

sigma_n: 5e-06
Metacalibration
----------------------
S/N: 59759.9
R11: 1.22816
shear [0.04433078 0.00523172]
shear true [0.05, 0.0]
m: -0.113384 +/- 0.341918 (99.7% conf)
c: 0.00523172 +/- 0.0171501 (99.7% conf)
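For what it's worth, a minimal sketch of why the pixel scale matters: the WCS jacobian converts pixel offsets to sky coordinates, and leaving it out silently assumes a scale of 1. The 0.2 arcsec/pixel value below is an assumption for illustration; in ngmix this is typically done with something like `ngmix.DiagonalJacobian(row=cen, col=cen, scale=pixel_scale)` attached to both the galaxy and PSF observations.

```python
import numpy as np

# Hypothetical value; use the simulation's actual pixel scale.
pixel_scale = 0.2  # arcsec / pixel

# A diagonal WCS jacobian maps pixel offsets to sky offsets (arcsec).
# Omitting it is equivalent to scale = 1, which rescales the measured
# moments and hence the shear response R11.
jac = np.diag([pixel_scale, pixel_scale])

# A 1-pixel offset corresponds to 0.2 arcsec on the sky.
sky_offset = jac @ np.array([1.0, 0.0])
```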

I'm still investigating. I also have to understand how to set the weight_fwhm for the ngmix GaussMom fitter.

EiffL commented 2 years ago

So the weight_fwhm defines the size of the gaussian window used to measure weighted moments. It is expressed in arcsecs. Based on your pixel scale and, roughly, on the size you expect for your galaxies, you can figure out what to use.

The window should be almost zero at the edges of the stamps
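A quick way to sanity-check a candidate weight_fwhm against that rule of thumb (the 1.2 arcsec / 45-pixel / 0.2 arcsec/pixel values below are illustrative assumptions, not the actual configuration):

```python
import numpy as np

def window_at_edge(weight_fwhm, stamp_size, pixel_scale):
    """Peak-normalized value of the Gaussian moment window at the stamp
    edge; it should be close to zero for unbiased weighted moments."""
    sigma = weight_fwhm / 2.3548200450309493   # FWHM -> sigma
    r_edge = (stamp_size / 2) * pixel_scale    # arcsec from center to edge
    return np.exp(-0.5 * (r_edge / sigma) ** 2)

# e.g. weight_fwhm = 1.2 arcsec on a 45-pixel stamp at 0.2 arcsec/pixel
w = window_at_edge(1.2, 45, 0.2)  # effectively zero at the edge: OK
```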

b-remy commented 2 years ago

Actually the comparison to the ngmix metacal example is not completely fair, since the latter assumes fixed-size Exponential profiles with no intrinsic ellipticity.

So I made a new toy model (toymodel0.py) to have the exact same data as in ngmix.examples.metacal.metacal.py, i.e. assuming fixed-size Exponential profiles with no intrinsic ellipticity.

The metacal0.py script reproduces exactly ngmix.examples.metacal.metacal.py, except that the galaxy images are created from our simulations.

Both scripts yield different results, m: 0.100325 vs m: 0.000185307

If you want to reproduce these results you can:

  1. create simulations: `python toymodel0.py --save=True --plot=False --N=32`
  2. run metacal with our sims: `python metacal0.py --filename=data/sims_toymodel0_0.fits`
  3. run metacal with galsim sims: `ngmix.examples.metacal.metacal.py` (with no shift, i.e. `dx = dy = 0.`)

So assuming that I haven't forgotten anything, the metacalibration is the same. Only the codes to create images differ.

Maybe the problem is coming from GalFlow?

In GalFlow we have tests on light profile generation. The two things I am currently suspecting are [1] the PSF convolution performed with GalFlow and [2] the shearing operation.

In order to check [1], I made another script ngmix/metacal.py, which also does the same thing as ngmix.examples.metacal.metacal.py, except that we can choose to perform the PSF convolution with GalFlow (--galflowconv=1) or galsim (--galflowconv=0).

So the way we perform the convolution also seems to be a source of error.

EiffL commented 2 years ago

hmmmm, ok, yeah I think it's a problem of postage stamp size; we know that the convolution/shear only works for odd-size images in autometacal. This is related to https://github.com/DifferentiableUniverseInitiative/GalFlow/issues/8

I'm not sure the psf convolution using galflow.convolve here is working correctly

EiffL commented 2 years ago

So, I think the thing that doesn't work is when the input image is in the pixel domain. The kconvolve function should work as long as both the image and the PSF are already provided in k-space.
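For illustration, a numpy stand-in for that k-space path (this is not GalFlow's actual kconvolve; both inputs here are real-space stamps, so they are transformed first, whereas kconvolve itself would take k-space inputs directly):

```python
import numpy as np

def kconvolve_sketch(im, psf):
    """Convolve an image with a PSF by multiplying their Fourier
    transforms, i.e. the k-space path described above."""
    imk = np.fft.fft2(im)
    # ifftshift moves the PSF center from the stamp center to (0, 0),
    # so the convolution does not translate the image.
    psfk = np.fft.fft2(np.fft.ifftshift(psf))
    return np.fft.ifft2(imk * psfk).real

# Sanity check: convolving a centered delta with the PSF returns the PSF.
N = 33
c = N // 2
y, x = np.mgrid[:N, :N]
psf = np.exp(-0.5 * ((x - c) ** 2 + (y - c) ** 2) / 2.0 ** 2)
psf /= psf.sum()
im = np.zeros((N, N))
im[c, c] = 1.0
out = kconvolve_sketch(im, psf)
```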

EiffL commented 2 years ago

I've proposed a fix to the ngmix/metacal.py example, to do convolutions in k-space and to use an odd number of pixels to avoid half-pixel shifts compared to galsim.

You can check this commit for details: https://github.com/b-remy/gems/pull/4/commits/e1c0850b6fbfa5415ac56404a6506216da708861

Here are the results of that script using tensorflow convolutions:

S/N: 80162.7
R11: 0.351112
m: -0.000152554 +/- 0.000409342 (99.7% conf)
c: 1.00323e-06 +/- 4.13362e-06 (99.7% conf)
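As a side note on the odd-size choice: the half-pixel shift comes from where the FFT conventions place the image center, which can be seen without any imaging code (numpy sketch):

```python
import numpy as np

# np.fft places the zero frequency at index 0; after fftshift the
# implied image center is pixel n // 2. For even n this sits half a
# pixel away from the geometric stamp center (n - 1) / 2, which shows
# up as a half-pixel shift relative to galsim's drawing convention.
def center_offset(n):
    return n // 2 - (n - 1) / 2.0

even_off = center_offset(32)  # 0.5 pixel
odd_off = center_offset(33)   # 0.0
```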

b-remy commented 2 years ago

Last commit 4d3d23d validates that the convolution with GalFlow is working well, as long as we consider odd stamp sizes.

With GalFlow:

S/N: 84713.4
R11: 0.401513
m: 7.92014e-06 +/- 0.000347655 (99.7% conf)
c: -9.48739e-08 +/- 3.49117e-06 (99.7% conf)

With Galsim

S/N: 84713.9
R11: 0.401513
m: 1.01552e-05 +/- 0.000347654 (99.7% conf)
c: -4.69012e-08 +/- 3.49115e-06 (99.7% conf)

The difference with your fix @EiffL is that I now have a real-space galaxy image as input to the convolution.

Now I am going to try with toymodel0 and test the shear.

b-remy commented 2 years ago

Note that the same computation is worse in GalFlow when using an Exponential light profile instead of a Gaussian.

With GalFlow:

S/N: 79324.2
R11: 0.354662
m: -0.00676094 +/- 0.000411014 (99.7% conf)
c: -1.03561e-07 +/- 4.19112e-06 (99.7% conf)

With Galsim:

S/N: 80182.9
R11: 0.351485
m: -1.51384e-05 +/- 0.000412506 (99.7% conf)
c: -4.61679e-08 +/- 4.20315e-06 (99.7% conf)

EiffL commented 2 years ago

Nice!!! Yeah, it's possible that performance degrades with an Exponential. I guess the profile is more peaked, which means you need to sample on a finer grid to do the deconvolution.
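To make the peakiness argument concrete, here is a rough numpy comparison of how much of the central peak is lost over a single pixel for the two profiles (the half-light radius and pixel scale are assumed values for illustration):

```python
import numpy as np

# Assumed sizes for illustration: hlr = 0.5 arcsec, 0.2 arcsec pixels.
hlr, pix = 0.5, 0.2

# Exponential: I(r) ∝ exp(-r / r0), with hlr ≈ 1.6783 * r0
r0 = hlr / 1.6783
drop_exp = 1.0 - np.exp(-pix / r0)

# Gaussian: I(r) ∝ exp(-r² / 2σ²), with hlr ≈ 1.1774 * σ
sigma = hlr / 1.1774
drop_gauss = 1.0 - np.exp(-0.5 * (pix / sigma) ** 2)

# The Exponential loses a much larger fraction of its peak over one
# pixel, so the same grid undersamples it more.
```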

b-remy commented 2 years ago

Now that we have evaluated the impact of the convolution, let's check the impact of the shear.

toymodel0.py generates Gaussian light profiles (as in ngmix/metacal.py, for a fair comparison) and performs the PSF convolution with the validated tensorflow functions for odd stamp sizes. The metacal0.py script then returns:

sigma_n: 1e-06
Metacalibration
----------------------
S/N: 84834.2
R11: 0.40136
shear [ 1.0685088e-02 -1.6877929e-06]
shear true [0.01, 0.0]
m: 0.0685088 +/- 0.000416164 (99.7% conf)
c: -1.68779e-06 +/- 3.95437e-06 (99.7% conf)

compared to galsim shearing in ngmix/metacal.py:

S/N: 84713.4
R11: 0.401513
m: 7.92014e-06 +/- 0.000347655 (99.7% conf)
c: -9.48739e-08 +/- 3.49117e-06 (99.7% conf)

So the shearing also induces an error.

To summarize, these tests tell us that we need to:

  1. fix the shearing operation, which still induces a multiplicative bias;
  2. check the sampling for peaked profiles such as the Exponential.

What I propose is to handle these points in separate issues and pull requests.
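As a small aside for the shearing issue: one cheap consistency check between two shear implementations is that both should apply the same area-preserving linear map to coordinates. A sketch assuming the standard reduced-shear convention:

```python
import numpy as np

def shear_matrix(g1, g2):
    """Area-preserving coordinate transform for a reduced shear
    (g1, g2), the same linear map a profile-shearing operation applies."""
    a = np.array([[1.0 + g1, g2],
                  [g2, 1.0 - g1]])
    # Normalizing by sqrt(1 - |g|^2) makes the determinant exactly 1,
    # so shearing preserves flux/area.
    return a / np.sqrt(1.0 - g1 ** 2 - g2 ** 2)

A = shear_matrix(0.01, 0.0)
```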

b-remy commented 2 years ago

The aim of this pull request was to build utility functions to handle the data, and metacalibration scripts that can take it as input. The tests above were made in this configuration.

So I guess this pull request can be merged, with the remaining points handled in separate issues.

EiffL commented 2 years ago

hummmmmmmm two comments:

EiffL commented 2 years ago

You can set the unit tests up in pretty much the same way as in galflow, with GitHub Actions, so that they run automatically on PRs.
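A minimal workflow along those lines might look like the following (a hypothetical sketch mirroring the GalFlow setup; the file path, Python version, and install command are assumptions to adapt):

```yaml
# .github/workflows/tests.yml -- hypothetical sketch; adjust the
# Python version and install step to this repository's layout.
name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - run: pip install -e .
      - run: pytest tests/
```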