Here I am recording some first results of the denoising algorithm.
Description
We trained baseline U-Net and Learnlets models on the denoising task for the MCCD algorithm.
We generated a dataset of 25 000 64x64 images, corrupted with additive Gaussian noise, with parameters drawn in the following ranges:
-0.15 ≤ e1 ≤ 0.15
-0.15 ≤ e2 ≤ 0.15
2.5 ≤ R2 ≤ 8
We used 20 000 images for training and 5 000 for testing.
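The dataset generation described above can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the star profile is a toy elliptical Gaussian parameterised by (e1, e2, R2), and the noise level is a hypothetical placeholder, not the actual MCCD star simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_star(e1, e2, r2, size=64):
    """Render a toy elliptical Gaussian star (placeholder for the real profile).

    The (e1, e2, R2) -> covariance mapping below is an assumption for
    illustration, not the parameterisation used in the report.
    """
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    sigma2 = r2 / 2.0
    cxx = sigma2 * (1 + e1)
    cyy = sigma2 * (1 - e1)
    cxy = sigma2 * e2
    det = cxx * cyy - cxy**2
    q = (cyy * x**2 - 2 * cxy * x * y + cxx * y**2) / det
    img = np.exp(-0.5 * q)
    return img / img.sum()  # flux-normalised

def make_dataset(n, noise_sigma=1e-4, size=64):
    """Sample parameters in the report's ranges and add Gaussian noise."""
    e1 = rng.uniform(-0.15, 0.15, n)
    e2 = rng.uniform(-0.15, 0.15, n)
    r2 = rng.uniform(2.5, 8.0, n)
    clean = np.stack([make_star(a, b, c, size) for a, b, c in zip(e1, e2, r2)])
    noisy = clean + rng.normal(0.0, noise_sigma, clean.shape)
    return clean, noisy

# Small demo; the report uses 25 000 images split 20 000 / 5 000.
clean, noisy = make_dataset(8)
```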
The Learnlets parameters are n_tiling = 64 and n_scales = 5, optimized with Adam and lr = 1e-3. This results in 16 125 trainable parameters.
The U-Net parameters are kernel_size = 3 and layers_n_channels = [4, 8, 16, 32], optimized with Adam and lr = 1e-4. This results in 33 653 trainable parameters.
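Both models are optimized with Adam; as a reminder of what that choice means, a single Adam update (the standard rule, shown here in plain NumPy with the Learnlets' lr = 1e-3) looks like:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update with bias-corrected first and second moment estimates."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)          # bias correction, step t >= 1
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.zeros(3)
m, v = np.zeros(3), np.zeros(3)
grad = np.array([1.0, -1.0, 0.5])
theta, m, v = adam_step(theta, grad, m, v, t=1)
# On the first step the update is approximately -lr * sign(grad).
```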
Examples of the different results:
Result table
The results in terms of RMSE are presented in the following table:
| Model     | SNR | Train RMSE | Test RMSE  | e1 RMSE    | e2 RMSE    | R2 RMSE    |
|-----------|-----|------------|------------|------------|------------|------------|
| U-Nets    | 30  | 6.0978e-05 | 6.7784e-05 | 6.9762e-03 | 6.1692e-03 | 8.4194e-03 |
| Learnlets | 30  | 9.3257e-05 | 1.1452e-04 | 1.1648e-02 | 7.5156e-03 | 1.7424e-02 |
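The RMSE values above are root mean squared errors, over pixels for the train/test columns and over the per-star measured quantities for the e1, e2, and R2 columns (the exact averaging convention is an assumption); a minimal sketch:

```python
import numpy as np

def rmse(pred, target):
    """Root mean squared error over all elements of the two arrays."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.sqrt(np.mean((pred - target) ** 2)))

# e.g. RMSE between denoised-star and reference ellipticities
err = rmse([0.10, -0.05, 0.02], [0.12, -0.04, 0.02])
```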
Histograms of the differences in ellipticities are presented below (bins = 40).
We use the GalSim module to measure the ellipticities of the stars after denoising and compare them to the GalSim-measured ellipticities of the generated stars before noise was added (we use the measured ellipticity rather than the true one to take into account the bias introduced by the GalSim measurement).
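GalSim's adaptive-moments measurement is not reproduced here; as a rough self-contained illustration of moment-based shape measurement, the sketch below computes (e1, e2) and the size R2 = Qxx + Qyy from unweighted second moments (an assumption for illustration, not GalSim's HSM algorithm):

```python
import numpy as np

def moments_ellipticity(img):
    """Ellipticity (e1, e2) and size R2 = Qxx + Qyy from unweighted moments."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    y, x = np.mgrid[: img.shape[0], : img.shape[1]]
    xc = (img * x).sum() / total
    yc = (img * y).sum() / total
    qxx = (img * (x - xc) ** 2).sum() / total
    qyy = (img * (y - yc) ** 2).sum() / total
    qxy = (img * (x - xc) * (y - yc)).sum() / total
    e1 = (qxx - qyy) / (qxx + qyy)
    e2 = 2 * qxy / (qxx + qyy)
    return e1, e2, qxx + qyy

# Round Gaussian test star: its ellipticity should vanish.
y, x = np.mgrid[:64, :64] - 31.5
star = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
e1, e2, r2 = moments_ellipticity(star)
```

In practice one would run the same measurement on the denoised and the noise-free stars and histogram the differences, as done above.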
As we can see from the histograms, both models produce a bias in the ellipticity parameters.
The following table presents the mean values of the e1, e2, and R2 errors, to quantify this bias.
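The bias quantification is simply the mean signed error per parameter, which, unlike the RMSE, keeps the sign of the deviation; a minimal sketch:

```python
import numpy as np

def mean_error(measured, reference):
    """Mean signed error; a value away from zero indicates a bias."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean(measured - reference))

# A systematic offset shows up directly in the mean error.
bias = mean_error([0.11, 0.21, 0.31], [0.10, 0.20, 0.30])
```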