Here I am recording the results of the generalisation experiments for the denoising models.
Description
After training baseline U-Nets and Learnlets models on the denoising task for the MCCD algorithm, we wanted to see how they generalised, especially across different noise levels.
We generated a dataset of 45 000 images of 64x64 pixels, corrupted with Gaussian noise, using the following parameters (a sketch of the sampling grid is given after the list):
15 values of e1 with -0.15 ≤ e1 ≤ 0.15
15 values of e2 with -0.15 ≤ e2 ≤ 0.15
8 values of R2 with 2.5 ≤ R2 ≤ 8
19 values of SNR with 10 ≤ SNR ≤ 200, and a 20th bin with SNR = 0
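As an illustration, here is a minimal sketch of how such a sampling grid could be built with NumPy. The evenly spaced sampling, the variable names, and the way the SNR = 0 bin is appended are assumptions for illustration, not taken from the original code.

```python
import numpy as np

# Hypothetical reconstruction of the sampling described above.
e1_values = np.linspace(-0.15, 0.15, 15)          # 15 values of e1
e2_values = np.linspace(-0.15, 0.15, 15)          # 15 values of e2
r2_values = np.linspace(2.5, 8.0, 8)              # 8 values of R2
snr_values = np.concatenate(([0.0],               # extra SNR = 0 bin
                             np.linspace(10, 200, 19)))  # 19 SNR values

# Cartesian product of the sampled values; how these combinations are
# drawn to build the 45 000-image dataset is not detailed in this note.
grid = np.array(np.meshgrid(e1_values, e2_values, r2_values, snr_values,
                            indexing="ij")).reshape(4, -1).T
```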
To estimate the different "real-life" noise levels, we built a tool to retrieve the SNR of an image, and we used it to measure the SNR distribution over 535 417 stars from CFIS data. The detailed methodology can be found in the notebooks folder of this repository. We added a 0 SNR bin to train the model on reconstructing the shape and size of the stars.
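For reference, here is a minimal sketch of an SNR estimate and of noise injection at a target SNR, assuming the SNR is defined as the ratio of the image L2 norm to the expected noise L2 norm. Both this definition and the function names are assumptions; the actual tool is the one documented in the notebooks folder.

```python
import numpy as np

def estimate_snr(image, sigma):
    """Hypothetical SNR estimate: L2 norm of the signal divided by the
    expected L2 norm of the Gaussian noise, sigma * sqrt(number of pixels)."""
    return np.linalg.norm(image) / (sigma * np.sqrt(image.size))

def add_noise_at_snr(image, snr, rng=None):
    """Add white Gaussian noise so that the noisy image matches the target SNR
    under the definition above."""
    rng = np.random.default_rng() if rng is None else rng
    if snr == 0:
        # The SNR = 0 bin is special-cased; its exact handling in the original
        # pipeline is not detailed here, so the clean image is returned as a placeholder.
        return image.copy()
    sigma = np.linalg.norm(image) / (snr * np.sqrt(image.size))
    return image + rng.normal(0.0, sigma, size=image.shape)
```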
We used 36 000 images for training and 9 000 for testing.
Models
The Learnlets parameters are n_tiling = 64, n_scales = 5, optimized with Adam and lr=1e-3, with 500 epochs and 200 steps per epoch. This results in 16 125 trainable parameters.
The U-Net parameters are kernel_size = 3 and layers_n_channels = [4, 8, 16, 32], optimized with Adam and lr=1e-4, with 500 epochs and 200 steps per epoch. This results in 33 653 trainable parameters.
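As an illustration of the U-Net configuration above, here is a minimal Keras sketch using those channel widths and kernel size. It is a generic reconstruction, not the exact architecture used here: choices such as ReLU activations, 2x2 max pooling, and two convolutions per scale are assumptions, so the trainable-parameter count will not match exactly.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_unet(input_shape=(64, 64, 1),
               layers_n_channels=(4, 8, 16, 32),
               kernel_size=3):
    """Generic U-Net sketch with the channel widths quoted above."""
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    skips = []

    # Encoder: two convolutions per scale, then downsampling.
    for n_channels in layers_n_channels[:-1]:
        x = layers.Conv2D(n_channels, kernel_size, padding="same", activation="relu")(x)
        x = layers.Conv2D(n_channels, kernel_size, padding="same", activation="relu")(x)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)

    # Bottleneck at the coarsest scale.
    x = layers.Conv2D(layers_n_channels[-1], kernel_size, padding="same", activation="relu")(x)
    x = layers.Conv2D(layers_n_channels[-1], kernel_size, padding="same", activation="relu")(x)

    # Decoder: upsample, concatenate the skip connection, convolve.
    for n_channels, skip in zip(reversed(layers_n_channels[:-1]), reversed(skips)):
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(n_channels, kernel_size, padding="same", activation="relu")(x)
        x = layers.Conv2D(n_channels, kernel_size, padding="same", activation="relu")(x)

    outputs = layers.Conv2D(1, 1, padding="same")(x)  # single-channel denoised image
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
    return model
```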
Examples of the different results:
U-Nets reconstruction of small stars with and without noise
Learnlets reconstruction of small stars with and without noise
U-Nets reconstruction of big stars with and without noise
Learnlets reconstruction of big stars with and without noise
Result table
The results in terms of RMSE are presented in the following table:
| Model     | SNR   | Train RMSE | Test RMSE | e1 RMSE   | e2 RMSE   | R2 RMSE   |
|-----------|-------|------------|-----------|-----------|-----------|-----------|
| U-Nets    | 0-200 | 5.349e-05  | 6.086e-05 | 6.821e-03 | 4.510e-03 | 8.081e-03 |
| Learnlets | 0-200 | 7.377e-05  | 9.954e-05 | 1.152e-02 | 6.186e-03 | 1.548e-02 |
Histograms of the differences in ellipticities are presented hereinafter (bins = 40).
We use the Galsim module to measure the ellipticities of the stars after denoising, and we compare them to the Galsim-measured ellipticities of the generated stars before noising (we use the measured ellipticity rather than the true one to take into account the bias introduced by the Galsim measurement).
For R2, we used the relative error (True R2 - Obs R2)/True R2 to take into account the relative amplitude of the observations in the error measurement. However, since some very small values of e1 and e2 would make such a relative error blow up, we simply used the absolute difference for the ellipticities.
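To make the measurement step concrete, here is a minimal sketch of how these quantities could be computed with GalSim's adaptive-moments (HSM) routine. The use of FindAdaptiveMom, the interpretation of moments_sigma squared as the R2 size measure, and the helper names are assumptions about how the comparison was done, not the repository's actual code.

```python
import numpy as np
import galsim

def measure_shape(image_array, pixel_scale=1.0):
    """Measure ellipticity and size with GalSim's adaptive moments (HSM)."""
    image = galsim.Image(np.ascontiguousarray(image_array), scale=pixel_scale)
    result = galsim.hsm.FindAdaptiveMom(image)
    e1 = result.observed_shape.e1
    e2 = result.observed_shape.e2
    # Assumption: use the adaptive-moments sigma squared as the R2 size measure.
    r2 = result.moments_sigma ** 2
    return e1, e2, r2

def shape_errors(clean_image, denoised_image):
    """Errors as described above: absolute differences for e1 and e2,
    relative difference for R2."""
    e1_true, e2_true, r2_true = measure_shape(clean_image)
    e1_obs, e2_obs, r2_obs = measure_shape(denoised_image)
    return (e1_true - e1_obs,
            e2_true - e2_obs,
            (r2_true - r2_obs) / r2_true)
```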
As the histograms show, both models still introduce a bias in the ellipticity parameters.
The following table presents the mean values of the e1, e2, and R2 errors, to quantify this bias.