lanha / DSen2

Super-Resolution of Sentinel-2 Images: Learning a Globally Applicable Deep Neural Network
GNU General Public License v3.0

Reproduction of results #42

Open davidiagraid opened 1 year ago

davidiagraid commented 1 year ago

Hello,

I started playing with the models you provided for super-resolution, first trying to reproduce the results from the article "Super-resolution of Sentinel-2 images: Learning a globally applicable deep neural network" describing the method. The approach is the following: I took four images of size (110 km)² directly from the testing set shared in the git repository:

- London: S2A_MSIL1C_20170522T110621_N0205_R137_T30UXC_20170522T110912
- El Salvador: S2A_MSIL1C_20170419T155901_N0204_R097_T16PEU_20170419T161354
- Japan city: S2A_MSIL1C_20170216T015741_N0204_R060_T52SFB_20170216T015924
- NZ mountains: S2A_MSIL1C_20170616T223701_N0205_R072_T59GLL_20170616T223702

For the 2× model, I downsampled the 10 m and 20 m GSD bands (A and B bands) to 20 m and 40 m GSD respectively, and then applied the DSen2_20 function in your code to these (which corresponds to inference with the trained model) to get the super-resolved B bands. Then, I computed the band-wise root mean squared error (RMSE) between the super-resolved (20 m GSD) B bands and the ground-truth 20 m GSD B bands. I did the analogous procedure for the 6× model, as described in the paper.
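For concreteness, the band-wise RMSE step described above can be sketched as follows (`bandwise_rmse` is a hypothetical helper written for this comment, not a function from the repo):

```python
import numpy as np

def bandwise_rmse(pred, gt):
    """Per-band RMSE between two (H, W, C) arrays."""
    diff = pred.astype(np.float64) - gt.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2, axis=(0, 1)))

# Toy example: 6 "bands" (like the 20 m B bands) with a constant offset,
# so each band's RMSE should come out to exactly that offset.
rng = np.random.default_rng(0)
gt = rng.random((8, 8, 6))
pred = gt + 0.1
print(bandwise_rmse(pred, gt))  # ~0.1 for every band
```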

As a result, I obtained the attached plot (orange: RMSE for bicubic; blue: RMSE for DSen2) (figure: RMSE_test_images). The DSen2 model performs only slightly better than bicubic interpolation (at best about 1.1× lower RMSE than bicubic). My problem is that I cannot reproduce the results from the paper (3 to 4 times lower RMSE than bicubic). Is there something I missed in the pipeline?

Also, in the demo code, you compare the super-resolved B bands with a ground truth called imGT. Is it a 10 m GSD ground truth (in that case, is there a specific way to obtain a ground truth at this resolution for the B bands?), or have all the input images (im10 and im20) been downsampled beforehand?

lanha commented 1 year ago

Hello, I suspect the issue comes from: "I downsampled the 10 and 20m GDS bands (A and B bands) to respectively 20 and 40m GSD". How did you perform this? Have you tried this function: https://github.com/lanha/DSen2/blob/master/utils/patches.py#L353

> is there a specific way to get this resolution ground truth for the B bands?

No.

> or all the input images (im10 and im20) have been downsampled before?

Yes, you are correct: all have been downsampled beforehand.
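To illustrate why the choice of downsampling matters, here is a minimal sketch of pixel-aggregation downsampling, assuming a simple block mean (the repo's actual `downPixelAggr` in `utils/patches.py` may differ in detail, e.g. by applying a blur before aggregating; `block_mean_downsample` is a name invented for this sketch):

```python
import numpy as np

def block_mean_downsample(img, scale):
    """Downsample an (H, W, C) image by averaging scale x scale blocks.

    Simplified stand-in for pixel aggregation: each output pixel is the
    mean of the corresponding scale x scale block of input pixels.
    """
    h, w, c = img.shape
    h, w = (h // scale) * scale, (w // scale) * scale  # crop to a multiple of scale
    img = img[:h, :w, :]
    return img.reshape(h // scale, scale, w // scale, scale, c).mean(axis=(1, 3))

# 4x4 single-band toy image; the top-left 2x2 block [[0, 1], [4, 5]]
# averages to 2.5 in the output.
img = np.arange(16, dtype=np.float64).reshape(4, 4, 1)
out = block_mean_downsample(img, 2)
print(out.shape)  # (2, 2, 1)
```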

davidiagraid commented 1 year ago

> How did you perform this? Have you tried this function:

Yes, I used the downPixelAggr function you provide for this. The main part of my code is:

```python
data360 = downPixelAggr(data60_gt, SCALE=6)
data60 = downPixelAggr(data10_gt, SCALE=6)
data120 = downPixelAggr(data20_gt, SCALE=6)  # downsampling 10m, 20m, 60m to 60m, 120m, 360m
print("Super-resolving the 360m downsampled data into 60m bands")
sr60 = DSen2_60(data60, data120, data360, deep=False)

print("Super-resolving the 40m downsampled data into 20m bands")
data40 = downPixelAggr(data20_gt, SCALE=2)
data20 = downPixelAggr(data10_gt, SCALE=2)  # downsampling 10m, 20m to 20m, 40m
sr20 = DSen2_20(data20, data40, deep=False)
for i in range(np.shape(sr20)[-1]):
    rmse20 = RMSE(sr20[:, :, i], data20_gt[:, :, i])  # RMSE per band between the ground truth and the super-resolved image

SR_bicubic_20 = imresize(data40, output_shape=np.shape(data20_gt))  # bicubic upsampling
for i in range(np.shape(sr20)[-1]):
    metrics['Bicubic_20'][data20m[i]] = RMSE(SR_bicubic_20[:, :, i], data20_gt[:, :, i])  # RMSE between the bicubic-upsampled image and the ground truth
```
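One side note on the bicubic baseline: `scipy.misc.imresize` has been removed from recent SciPy releases, so if anyone reproduces this, a stand-in using `scipy.ndimage.zoom` (cubic-spline interpolation with `order=3`; `bicubic_upsample` is a name invented here, not from the repo) could look like:

```python
import numpy as np
from scipy.ndimage import zoom

def bicubic_upsample(img, scale):
    """Upsample an (H, W, C) array by `scale` in both spatial dimensions.

    order=3 selects cubic-spline interpolation; the channel axis is left
    untouched (zoom factor 1).
    """
    return zoom(img, (scale, scale, 1), order=3)

lowres = np.random.default_rng(1).random((16, 16, 6))
up = bicubic_upsample(lowres, 2)
print(up.shape)  # (32, 32, 6)
```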

> yes, you are correct, all have been downsampled before.

Thank you, it makes more sense now.
lanha commented 1 year ago

I don't see anything abnormal in the snippet you shared.