Further generalisation attempts
Description
In order to reproduce real-life conditions, we wanted to test our models on data unseen during training. Thus, we generated a dataset of 24 750 64x64 images, degraded with Gaussian noise.
To do so, we took 15 values each of e1 and e2 with -0.15 ≤ e1, e2 ≤ 0.15, 10 values of R2 with 2.5 ≤ R2 ≤ 8, and different noise levels with 10 ≤ SNR ≤ 200.
Then, we removed the extremum values (-0.15 and 0.15 for e1 and e2; 10, 190, and 200 for SNR), as well as random in-between values, from the e1, e2, and SNR arrays. We generated a training dataset of 24 750 64x64 images based on the reduced e1, e2, and SNR arrays, and we used the held-out values to generate a test dataset of 3 000 64x64 images.
Thus, we tested our models on generated images with both unseen ellipticities and unseen noise levels.
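To make the split concrete, here is a minimal sketch of this data-generation scheme, assuming evenly spaced grids, a Gaussian star profile, and GalSim for the simulation; the SNR grid spacing, the profile choice, and the mapping from R2 to the profile size are our assumptions, not specified above.

```python
import numpy as np
import galsim

# Parameter grids: the bounds and the 15/10 value counts come from the text;
# the 20-value SNR grid is an assumption (only 10 <= SNR <= 200 is given).
e_grid = np.linspace(-0.15, 0.15, 15)
r2_grid = np.linspace(2.5, 8.0, 10)
snr_grid = np.linspace(10.0, 200.0, 20)

def hold_out(grid, extremes, n_random, rng):
    """Remove the extreme values plus n_random in-between values from a grid;
    the removed values form the test array."""
    kept = np.setdiff1d(grid, extremes)
    random_out = rng.choice(kept, size=n_random, replace=False)
    return np.setdiff1d(kept, random_out), np.concatenate([extremes, random_out])

rng = np.random.default_rng(0)
e_train, e_test = hold_out(e_grid, np.array([-0.15, 0.15]), 1, rng)
snr_train, snr_test = hold_out(snr_grid, np.array([10.0, 190.0, 200.0]), 1, rng)

def make_star(e1, e2, r2, snr, gs_rng):
    """Draw a 64x64 sheared star and add Gaussian noise at the target SNR.
    Mapping R2 to the Gaussian sigma via sqrt(R2) is a placeholder assumption."""
    star = galsim.Gaussian(sigma=np.sqrt(r2), flux=1.0).shear(e1=e1, e2=e2)
    image = star.drawImage(nx=64, ny=64, scale=1.0)
    image.addNoiseSNR(galsim.GaussianNoise(gs_rng), snr)
    return image.array

noisy = make_star(0.05, -0.1, 4.0, 50.0, galsim.BaseDeviate(1234))
```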
Models
The Learnlet parameters are n_tiling = 64 and n_scales = 5, optimized with Adam and lr=1e-3, with 500 epochs and 200 steps per epoch. This results in 16 125 trainable parameters.
In comparison to the previous Learnlet model, we added dynamic thresholding. Since dynamic thresholding requires the estimated noise level of the image as an input, we built a noise estimator and included it in the model.
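To illustrate the principle, here is a minimal sketch of noise-adaptive (dynamic) thresholding using a classical MAD-based noise estimate; the actual Learnlet noise estimator is learned, and the exact threshold rule used in the model is an assumption here.

```python
import numpy as np

def estimate_sigma_mad(finest_detail):
    """Classical MAD noise estimate from the finest-scale detail coefficients;
    stands in for the learned noise estimator built into the model."""
    return np.median(np.abs(finest_detail)) / 0.6745

def dynamic_soft_threshold(coeffs, sigma, k=3.0):
    """Soft-threshold the coefficients at k * estimated noise level, so the
    threshold adapts to each image's noise instead of being fixed."""
    t = k * sigma
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```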
The U-Net parameters are kernel_size = 3 and layers_n_channels = [4, 8, 16, 32], optimized with Adam and lr=1e-4, with 500 epochs and 200 steps per epoch. This results in 33 653 trainable parameters.
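A minimal Keras sketch of a U-Net with these hyperparameters follows; the exact architecture (activations, pooling, number of convolutions per scale) is assumed, so this sketch will not reproduce the 33 653 parameter count exactly.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_unet(channels=(4, 8, 16, 32), kernel_size=3):
    """Small U-Net matching the listed hyperparameters (architecture assumed)."""
    inputs = tf.keras.Input(shape=(64, 64, 1))
    x, skips = inputs, []
    for c in channels[:-1]:  # encoder: conv then downsample, keeping skips
        x = layers.Conv2D(c, kernel_size, padding="same", activation="relu")(x)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(channels[-1], kernel_size, padding="same", activation="relu")(x)
    for c, skip in zip(reversed(channels[:-1]), reversed(skips)):  # decoder
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(c, kernel_size, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(1, 1, padding="same")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
    return model
```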
[Insert loss graphs]
Examples of the different results:
U-Nets reconstruction of small stars with and without noise
Learnlets reconstruction of small stars with and without noise
U-Nets reconstruction of big stars with and without noise
Learnlets reconstruction of big stars with and without noise
Result table
The results in terms of RMSE are presented in the following table:
Model      SNR     Train RMSE   Test RMSE    e1 RMSE     e2 RMSE     R2 RMSE
U-Nets     0-200   4.983e-05    5.879e-05    5.360e-03   4.446e-03   7.287e-03
Learnlets  0-200   3.451e-05    5.813e-05    4.443e-03   4.574e-03   8.648e-03
Histograms of the differences in ellipticities are presented hereinafter (bins = 40).
We use the GalSim module to measure the ellipticities of the stars after denoising, and we compare them to the GalSim-measured ellipticities of the generated stars before adding noise (we use the measured ellipticity rather than the true one to take into account the bias introduced by the GalSim measurement).
For R2, we used the relative error (True R2 - Obs R2) / True R2, to take into account the amplitude of the observations in the error measurement. For e1 and e2, however, some very small values would make this formula explode, so we simply used the plain difference. A sketch of these measurements is given below.
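As an illustration, here is how this could be done with GalSim's adaptive-moments (HSM) routine; approximating R2 by the squared adaptive-moments sigma is our assumption, as the exact size definition is not given above.

```python
import galsim

def measure_shape(img_array, scale=1.0):
    """Ellipticity and size from GalSim's adaptive moments (HSM)."""
    result = galsim.hsm.FindAdaptiveMom(galsim.Image(img_array, scale=scale))
    return result.observed_shape.e1, result.observed_shape.e2, result.moments_sigma

def shape_errors(clean_star, denoised_star):
    """Errors as defined above: plain differences for e1/e2, relative error for R2."""
    e1_c, e2_c, sig_c = measure_shape(clean_star)
    e1_d, e2_d, sig_d = measure_shape(denoised_star)
    r2_c, r2_d = sig_c**2, sig_d**2  # R2 approximated as sigma^2 (assumption)
    return e1_c - e1_d, e2_c - e2_d, (r2_c - r2_d) / r2_c
```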
The following table presents the mean values of the e1, e2, and R2 errors, to quantify this bias.
Model      e1 error     e2 error     R2 error
U-Nets     5.556e-03    -3.499e-05   1.892e-03
Learnlets  -5.092e-04   6.999e-04    3.209e-03
Our results suggest that the Learnlets with dynamic thresholding perform better than the U-Nets when it comes to generalising to unseen data.