aziz-ayed / denoising

Research to replace the Wavelet approach in the denoising task of the MCCD method by a Machine Learning solution

Bias in the ellipticities #4

Open aziz-ayed opened 3 years ago

aziz-ayed commented 3 years ago

Here I am recording some first results of the denoising algorithm.

Description

We trained baseline U-Net and Learnlet models on the denoising task for the MCCD algorithm. We generated a dataset of 25 000 images of size 64x64, corrupted with Gaussian noise; the noise parameters are given in the attached figure.
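For illustration, here is a minimal sketch of the noising step, assuming the SNR is defined as peak flux over noise standard deviation (the actual definition and parameter values are the ones in the attachment):

```python
import numpy as np

def add_gaussian_noise(stamp, snr=30, rng=None):
    """Return a noisy copy of a 64x64 star stamp at a target SNR.

    The SNR definition (peak flux over noise sigma) is an assumption made for
    illustration; the dataset itself was generated with the parameters above.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = stamp.max() / snr              # assumed SNR definition
    return stamp + rng.normal(0.0, sigma, size=stamp.shape)

# Example: build (noisy, clean) training pairs from a stack of clean stamps
# of shape (n_images, 64, 64).
# noisy_stamps = np.stack([add_gaussian_noise(s, snr=30) for s in clean_stamps])
```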

We used 20 000 images for training and 5 000 for testing. The Learnlet parameters are n_tiling = 64 and n_scales = 5, optimized with Adam and lr = 1e-3, which results in 16 125 trainable parameters. The U-Net parameters are kernel_size = 3 and layers_n_channels = [4, 8, 16, 32], optimized with Adam and lr = 1e-4, which results in 33 653 trainable parameters.
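As a rough illustration of the U-Net configuration, here is a minimal Keras sketch with the hyperparameters above. It is not the project's actual implementation (which differs, e.g., in the number of convolutions per scale, so parameter counts will not match exactly), and the Learnlet model (n_tiling = 64, n_scales = 5, Adam with lr = 1e-3) comes from its own code base.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_unet(kernel_size=3, layers_n_channels=(4, 8, 16, 32)):
    """Small U-Net sketch with the hyperparameters quoted above (illustration only)."""
    inputs = layers.Input(shape=(64, 64, 1))
    x = inputs
    skips = []
    # Encoder: one convolution + downsampling per scale.
    for n_channels in layers_n_channels[:-1]:
        x = layers.Conv2D(n_channels, kernel_size, padding='same', activation='relu')(x)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    # Bottleneck at the coarsest scale.
    x = layers.Conv2D(layers_n_channels[-1], kernel_size, padding='same', activation='relu')(x)
    # Decoder: upsample, concatenate the skip connection, convolve.
    for n_channels, skip in zip(reversed(layers_n_channels[:-1]), reversed(skips)):
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(n_channels, kernel_size, padding='same', activation='relu')(x)
    outputs = layers.Conv2D(1, 1, padding='same')(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss='mse')
    return model

# Training on the (noisy, clean) pairs, 20 000 for training and 5 000 held out:
# model.fit(noisy_train, clean_train, validation_data=(noisy_test, clean_test))
```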

Examples of the different results:

(Figures: U-Nets denoising and Learnlets denoising examples.)

Result table

The results in terms of RMSE are presented in the following table:

| Model | SNR | Train RMSE | Test RMSE | e1 RMSE | e2 RMSE | R2 RMSE |
| --- | --- | --- | --- | --- | --- | --- |
| U-Nets | 30 | 6.0978e-05 | 6.7784e-05 | 6.9762e-03 | 6.1692e-03 | 8.4194e-03 |
| Learnlets | 30 | 9.3257e-05 | 1.1452e-04 | 1.1648e-02 | 7.5156e-03 | 1.7424e-02 |

Histograms of the differences in ellipticities are presented hereinafter (bins = 40). We use the Galsim module to measure the ellipticities of the stars after denoising and compare them to the Galsim-measured ellipticities of the generated stars before the noise was added (we use the measured ellipticity rather than the true one to take into account the bias introduced by the Galsim measurement).
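A minimal sketch of the measurement step with Galsim's adaptive moments; the R2 definition used here (2 * sigma^2 from the measured moments) is an assumption made for illustration:

```python
import galsim
import numpy as np

def measure_shape(stamp, pixel_scale=1.0):
    """Measure (e1, e2, R2) of a star stamp with Galsim's adaptive moments.

    R2 is taken here as 2 * moments_sigma**2 (pixel units); the exact size
    definition used for the plots is an assumption.
    """
    image = galsim.Image(np.ascontiguousarray(stamp, dtype=np.float64), scale=pixel_scale)
    res = galsim.hsm.FindAdaptiveMom(image)
    shape = res.observed_shape           # a galsim.Shear object
    return shape.e1, shape.e2, 2.0 * res.moments_sigma ** 2

# Per-star error: measurement on the denoised star minus measurement on the
# clean (pre-noise) star, so that the Galsim measurement bias cancels out:
# e1_err = e1_denoised - e1_clean, and similarly for e2 and R2.
```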

As we can see from the histograms, both models introduce a bias in the ellipticity parameters.

The following table presents the mean values of the e1, e2, and R2 errors, to quantify this bias.

| Model | e1 error | e2 error | R2 error |
| --- | --- | --- | --- |
| U-Nets | -5.0125e-03 | -1.2518e-03 | -1.0665e-03 |
| Learnlets | -1.4860e-02 | 8.0475e-04 | 5.6255e-03 |
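The histograms (bins = 40) and the bias figures can be obtained from the per-star errors along these lines (array names are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

def summarize_bias(e1_err, e2_err, r2_err, label, bins=40):
    """Plot error histograms and return (mean, RMSE) per quantity.

    e1_err, e2_err, r2_err are per-star differences (denoised measurement
    minus clean measurement), e.g. for the U-Nets test set.
    """
    fig, axes = plt.subplots(1, 3, figsize=(12, 3))
    stats = {}
    for ax, err, name in zip(axes, (e1_err, e2_err, r2_err), ('e1', 'e2', 'R2')):
        ax.hist(err, bins=bins)
        ax.set_title(f'{label}: {name} error')
        # The bias is the mean error; the RMSE is the quadratic mean.
        stats[name] = (np.mean(err), np.sqrt(np.mean(err ** 2)))
    return stats
```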