leftfield-geospatial / homonim

Correct drone, aerial and satellite imagery to surface reflectance.
https://homonim.readthedocs.io/
GNU Affero General Public License v3.0

Bleeding Effects of Buildings on Surrounding Areas of the Corrected Surface Reflected Image and Questions regarding Panels #5


fakurten commented 1 month ago

Hello,

I have an aerial lidar dataset with 8-bit and 16-bit 4-band stacks (R, G, B and NIR) at a resolution of 15 cm, and we are trying to correct it using your algorithm. When running it with Sentinel-2 satellite imagery as the reference, we see white rooftops bleeding onto nearby pixels, as in the following image:

[image: corrected output, with the parking lot next to a white rooftop noticeably brightened]

You can see that part of the nearby parking lot is considerably brighter next to the building than the rest. Is there any way I can mitigate that effect? Should I increase or decrease the kernel size, change the model, or adjust other homonim parameters? Do I need to pan-sharpen the satellite imagery to get better results? The process I used was the following:

from homonim import Model, RasterFuse

src_path = "test_source.tif"
ref_path = "20240410T174909_20240410T175332_T13SCU.tif"
corr_path = "test_corr.tif"

# Match source bands 1-4 (R, G, B, NIR) to the corresponding Sentinel-2 bands.
with RasterFuse(src_path, ref_path, src_bands=[1, 2, 3, 4], ref_bands=[4, 3, 2, 8]) as raster_fuse:
    raster_fuse.process(corr_path, Model.gain_blk_offset, (5, 5), overwrite=True)

For reference, I followed the tutorials in the documentation: the source is an orthorectified image (taken with an UltraCam Eagle Mark 4), and the reference is a satellite image that meets the requirements of co-location, concurrency and spectral similarity to the source.

I also wanted to ask: what is the best practice for deploying your algorithm? I corrected both the 8-bit and the 16-bit image against the same reference satellite image, and I am not sure of the best way to scale the corrected output, since it is Float32. I scaled it with gdal_translate using the per-band min and max values to get the final 8-bit and 16-bit images. We have placed reflectance panels in the imaged scenes, and I have seen that the pixel values on the panels do not coincide with, or come close to, the panels' true reflectance values (using a corrected Float32 image scaled from 0 to 1):

[image: corrected image showing one of the reflectance panels]

This image shows one of the panels we placed. The displayed values should be roughly in the 0.02-0.11 range, since the panel's true reflectance is 0.065 ± 0.05. The same can be seen on the other panels, as in the following image:

[image: corrected image with another reflectance panel highlighted]

The highlighted panel should display a reflectance of around 0.56 ± 0.05, but you can see it is off by around 0.1. So I am not sure whether my procedure for rescaling the homonim output is wrong, or whether this is something related to the algorithm.
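As a side note on how I check panel agreement: a minimal sketch (the `panel_mean` helper and the pixel window are mine for illustration, not part of homonim) that averages a panel's pixel window in one corrected band and compares it to the panel's nominal reflectance:

```python
import numpy as np

def panel_mean(band: np.ndarray, rows: slice, cols: slice) -> float:
    """Mean reflectance over a panel's pixel window in one band."""
    return float(band[rows, cols].mean())

# Synthetic stand-in for a corrected band scaled 0-1; a real check would
# read the band (e.g. with rasterio) and use the panel's actual pixel window.
band = np.full((100, 100), 0.2)
band[40:50, 40:50] = 0.07  # a dark panel

measured = panel_mean(band, slice(40, 50), slice(40, 50))
nominal, tol = 0.065, 0.05
print(f"measured {measured:.3f}, nominal {nominal} +/- {tol}")
print("within tolerance:", abs(measured - nominal) <= tol)
```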

The rescaling I used is basically gdal_translate, called from Python:

import os

import numpy as np
import skimage.io


def image_rescaling(input_file, unit, output_file):
    '''Rescales the image with gdal_translate, given the input file, the
    data type to scale to ("Byte", "UInt16" or "Float32") and the output
    file name.'''
    assert isinstance(unit, str), f"Unit has to be a string, not {type(unit)}."
    assert unit in ['Byte', 'UInt16', 'Float32'], (
        f'Unit has to be "Byte", "UInt16" or "Float32"; got "{unit}".'
    )

    # Full range of the target data type (0-1 for Float32).
    if unit == 'Byte':
        max_val = 2 ** 8 - 1
    elif unit == 'UInt16':
        max_val = 2 ** 16 - 1
    else:
        max_val = 1

    # Per-band min/max of the source image (bands in R, G, B, NIR order).
    array = skimage.io.imread(input_file)
    band_min_max = [
        (np.min(array[:, :, i]), np.max(array[:, :, i])) for i in range(4)
    ]

    # Stretch each band from its own min/max to the full target range.
    scale_args = ' '.join(
        f'-scale_{i + 1} {b_min} {b_max} 0 {max_val}'
        for i, (b_min, b_max) in enumerate(band_min_max)
    )
    cli = f'gdal_translate -of GTiff -ot {unit} {scale_args} {input_file} {output_file}'
    os.system(cli)
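For what it's worth, the per-band stretch that gdal_translate applies here is a simple linear mapping; a pure-NumPy sketch of the same operation (the `stretch_band` name is mine):

```python
import numpy as np

def stretch_band(band: np.ndarray, out_max: float) -> np.ndarray:
    """Linearly map a band from its own [min, max] to [0, out_max]."""
    b_min, b_max = float(band.min()), float(band.max())
    return (band - b_min) / (b_max - b_min) * out_max

band = np.array([0.2, 0.5, 0.8])
stretched = stretch_band(band, 255.0)
print(stretched)  # min maps to 0, max maps to 255
```

A caveat of per-image min/max stretching is that the mapping depends on each image's content, so the same reflectance in two images can end up at different output values.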

My next test will probably be pan-sharpening the satellite imagery before running your algorithm, to see if that gives better results, but I would be interested in your thoughts.

Many thanks in advance,

Federico Kurten

dugalh commented 1 month ago

Hello.

A sliding kernel is used to estimate correction parameters, which means some bleeding over edges is unavoidable. Misalignment and land cover changes between source and reference can also cause bleeding effects. You could try a larger kernel shape (like 9x9), which should be less sensitive to this kind of source-reference difference. The image you shared looks pretty good, though - the bleeding is not obvious to me.
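To illustrate the mechanism (a toy sketch, not homonim's actual implementation): a sliding-window average spreads a sharp edge over roughly half the kernel width on each side, which is the kind of bleeding seen around bright rooftops:

```python
import numpy as np

def boxcar(signal: np.ndarray, k: int) -> np.ndarray:
    """Sliding-window mean with window size k (edges padded)."""
    padded = np.pad(signal, k // 2, mode='edge')
    return np.convolve(padded, np.ones(k) / k, mode='valid')

# A bright "rooftop" step next to dark "parking lot" pixels.
scene = np.array([0.1] * 10 + [0.9] * 10)
smoothed = boxcar(scene, 5)

# Dark pixels within k//2 = 2 of the edge are pulled up by the bright roof.
print(scene[7:10])     # all 0.1 in the input
print(smoothed[7:10])  # the last two values are brightened
```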

The scale of correction is roughly the size of the kernel (e.g. 5 x 5 at the 10 m Sentinel-2 resolution = 50 m x 50 m). Correction of objects smaller than this will be affected by surrounding pixels, so I wouldn't expect the surface reflectance on the panels to be accurate. Also bear in mind that an object needs to be present in the reference image to be corrected accurately, and accuracy is limited by that of the reference image, which is itself an approximation.
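The footprint arithmetic is worth writing down for a few kernel sizes (the helper name is mine):

```python
def correction_footprint(kernel_shape, ref_res_m):
    """Ground footprint of the correction kernel at the reference resolution."""
    return tuple(k * ref_res_m for k in kernel_shape)

for k in [(5, 5), (9, 9)]:
    print(k, '->', correction_footprint(k, 10), 'm')  # (5, 5) -> (50, 50) m
```

So even the 5 x 5 kernel averages over an area far larger than a reflectance panel.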

The reflectance scaling of the corrected image should more or less match that of the reference image; e.g. with these Sentinel-2 images, reflectance is scaled 0-10000. You could use the out_profile argument to configure the corrected image as uint16 in this case. Otherwise, using gdal_translate roughly as you are doing, but with fixed ranges, should work too. If multiple source images are being corrected with the same reference, it will be faster to scale the reference once and configure the corrected dtype with out_profile.
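A fixed-range version of the scaling, mapping 0-1 reflectance to the Sentinel-2 0-10000 convention as uint16 (a sketch; the function name is mine):

```python
import numpy as np

def to_sentinel2_uint16(refl: np.ndarray) -> np.ndarray:
    """Map 0-1 reflectance to the 0-10000 uint16 convention, using a fixed
    scale factor instead of a per-image min/max stretch."""
    return np.clip(np.round(refl * 10000), 0, 65535).astype(np.uint16)

scaled = to_sentinel2_uint16(np.array([0.0, 0.065, 0.56, 1.0]))
print(scaled)  # [0 650 5600 10000]
```

With a fixed range, the same reflectance always maps to the same output value, so corrected images stay comparable across acquisitions; a per-image min/max stretch does not guarantee that.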

Higher-resolution reference imagery will help, so long as the alignment between source and reference is good. I don't think Sentinel-2 has a pan band for sharpening, though.