aziz-ayed / denoising

Research to replace the Wavelet approach in the denoising task of the MCCD method by a Machine Learning solution

Tests on different SNRs #6

Open aziz-ayed opened 3 years ago

aziz-ayed commented 3 years ago

Here I record the results of the denoising algorithms on generalisation experiments.

Description

After training baseline U-Net and Learnlets models on the denoising task for the MCCD algorithm, we wanted to see how they generalise, especially across different noise levels. We generated a dataset of 45,000 64×64 images, corrupted with Gaussian noise, with the following parameters:

To estimate the different "real-life" noise levels, we built a tool to retrieve the SNR of an image, and we used it to measure the SNR distribution over 535,417 stars from CFIS data. The detailed methodology can be found in the notebooks folder of this repository. We also added a 0 SNR bin to train the model on reconstructing the shape and size of the stars.
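The SNR tool itself is in the notebooks; as a rough illustration, one common convention defines the SNR of a stamp as the total flux divided by the noise standard deviation accumulated over the pixels. The helper below is a hypothetical sketch under that assumption, not the repository's actual implementation:

```python
import numpy as np

def estimate_snr(image, noise_std):
    """Estimate the SNR of a star stamp as total flux over the noise
    standard deviation accumulated across all pixels.
    Hypothetical stand-in for the repository's SNR tool."""
    return image.sum() / (noise_std * np.sqrt(image.size))

# Toy example: a circular Gaussian star on a 64x64 grid
x = np.arange(64) - 31.5
xx, yy = np.meshgrid(x, x)
star = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
snr = estimate_snr(star, noise_std=0.1)
```

Applied to each of the 535,417 CFIS stamps, such a helper yields the SNR distribution used to choose the noise bins.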

We used 36,000 images for training and 9,000 for testing.
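The noise injection and split can be sketched as follows, again assuming the flux-over-noise SNR convention above (the repository's exact definition and split code are not shown in this issue):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(images, target_snr):
    """Corrupt each stamp with white Gaussian noise whose level is set so
    that flux / (sigma * sqrt(n_pix)) matches target_snr. The SNR
    convention is an assumption, not the repository's verified one."""
    n_pix = images.shape[-1] * images.shape[-2]
    flux = images.sum(axis=(-2, -1), keepdims=True)
    sigma = flux / (target_snr * np.sqrt(n_pix))
    return images + rng.normal(size=images.shape) * sigma

# Toy batch standing in for the 45,000 stamps; 80/20 split like 36,000/9,000
clean = rng.random((50, 64, 64))
noisy = add_gaussian_noise(clean, target_snr=50)
n_train = int(0.8 * len(clean))
x_train, x_test = noisy[:n_train], noisy[n_train:]
y_train, y_test = clean[:n_train], clean[n_train:]
```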

Models

The Learnlets parameters are `n_tiling = 64` and `n_scales = 5`, optimised with Adam (`lr = 1e-3`) for 500 epochs of 200 steps each, which gives 16,125 trainable parameters. The U-Net parameters are `kernel_size = 3` and `layers_n_channels = [4, 8, 16, 32]`, optimised with Adam (`lr = 1e-4`) for 500 epochs of 200 steps each, which gives 33,653 trainable parameters.
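For reference, the two training setups can be summarised as plain config dicts (the key names are illustrative; the repository's actual config format is not shown in this issue):

```python
# Hypothetical config dicts summarising the hyperparameters quoted above.
learnlets_cfg = {
    "n_tiling": 64,
    "n_scales": 5,
    "optimizer": "Adam",
    "lr": 1e-3,
    "epochs": 500,
    "steps_per_epoch": 200,
}
unets_cfg = {
    "kernel_size": 3,
    "layers_n_channels": [4, 8, 16, 32],
    "optimizer": "Adam",
    "lr": 1e-4,
    "epochs": 500,
    "steps_per_epoch": 200,
}

# Both models see the same training budget: 500 * 200 gradient updates.
updates = learnlets_cfg["epochs"] * learnlets_cfg["steps_per_epoch"]
```

Note that the two models differ only in architecture and learning rate; the training budget is identical, so the RMSE gap below is not a matter of training time.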

Examples of the different results:

U-Nets reconstruction of small stars with and without noise
*(images: `unets_nn_small`, `unets_wn_small`)*

Learnlets reconstruction of small stars with and without noise
*(images: `learnlets_nn_small`, `learnlets_wn_small`)*

U-Nets reconstruction of big stars with and without noise
*(images: `unets_nn_big`, `unets_wn_big`)*

Learnlets reconstruction of big stars with and without noise
*(images: `learnlets_nn_big`, `learnlets_wn_big`)*

Result table

The results in terms of RMSE are presented in the following table:

| Model | SNR | Train RMSE | Test RMSE | e1 RMSE | e2 RMSE | R2 RMSE |
|-----------|-------|-----------|-----------|-----------|-----------|-----------|
| U-Nets | 0-200 | 5.349e-05 | 6.086e-05 | 6.821e-03 | 4.510e-03 | 8.081e-03 |
| Learnlets | 0-200 | 7.377e-05 | 9.954e-05 | 1.152e-02 | 6.186e-03 | 1.548e-02 |
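The pixel RMSE columns follow the standard definition; a minimal sketch, with a sanity check on the scale of the reported numbers:

```python
import numpy as np

def rmse(pred, truth):
    """Root-mean-square error over all pixels of a batch of stamps."""
    return np.sqrt(np.mean((pred - truth) ** 2))

# Sanity check on scale: a constant pixel offset of 1e-3 yields an RMSE of
# exactly 1e-3, an order of magnitude above the test RMSEs in the table.
truth = np.zeros((10, 64, 64))
offset_rmse = rmse(truth + 1e-3, truth)
```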

Histograms of the differences in ellipticities are shown below (bins = 40).

We use the Galsim module to measure the ellipticities of the stars after denoising, and we compare them to the Galsim-measured ellipticities of the generated stars before adding noise (we use the measured ellipticity rather than the true one to account for the bias introduced by the Galsim measurement itself).

For R2 we used the relative error `(R2_true - R2_obs) / R2_true`, to account for the relative amplitude of the observations in the error measurement. However, very small values of e1 and e2 would make this formula blow up, so for the ellipticities we simply used the difference `e_true - e_obs`.
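The quantities involved can be illustrated with a simplified stand-in based on unweighted second moments (the issue's actual measurement uses Galsim's adaptive moments; the helper names below are hypothetical):

```python
import numpy as np

def moments_shape(image):
    """Ellipticities (e1, e2) and size R^2 from unweighted second moments.
    Simplified stand-in for the Galsim measurement used in this issue."""
    y, x = np.indices(image.shape)
    flux = image.sum()
    xc, yc = (x * image).sum() / flux, (y * image).sum() / flux
    q11 = ((x - xc) ** 2 * image).sum() / flux
    q22 = ((y - yc) ** 2 * image).sum() / flux
    q12 = ((x - xc) * (y - yc) * image).sum() / flux
    r2 = q11 + q22
    return (q11 - q22) / r2, 2 * q12 / r2, r2

def r2_error(r2_true, r2_obs):
    """Relative size error, as described above."""
    return (r2_true - r2_obs) / r2_true

# A round Gaussian star has e1 = e2 = 0; an elongated one has e1 > 0.
x = np.arange(64) - 31.5
xx, yy = np.meshgrid(x, x)
round_star = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
flat_star = np.exp(-(xx**2 / (2 * 4.0**2) + yy**2 / (2 * 2.0**2)))
e1_round, e2_round, r2_round = moments_shape(round_star)
e1_flat, e2_flat, r2_flat = moments_shape(flat_star)
```

For the ellipticities the error is the plain difference `e_true - e_obs`, since dividing by an e1 or e2 near zero would explode.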

As the histograms show, both models still exhibit a bias in the ellipticity parameters.

The following table presents the mean values of the e1, e2, and R2 errors, to quantify this bias.

| Model | e1 error | e2 error | R2 error |
|-----------|------------|-----------|-----------|
| U-Nets | -5.588e-03 | 1.816e-03 | 4.804e-03 |
| Learnlets | -1.032e-02 | 1.871e-03 | 4.839e-03 |