cogeotiff / rio-tiler-pds

Rio-Tiler plugin for public datasets
https://cogeotiff.github.io/rio-tiler-pds/
BSD 3-Clause "New" or "Revised" License

azimuth line and pixelcount of extracted tile/point #32

Open Blockchainforcommons opened 3 years ago

Blockchainforcommons commented 3 years ago

Hi,

Since Sentinel-1 data requires pre-processing (e.g. radiometric calibration, thermal noise reduction, speckle filtering), we need to use the metadata files delivered in the productInfo. Because rio-tiler extracts only a part of the entire image, to derive the right metadata we need to know the azimuth line and pixel count of the extracted tile/point.

Does rio-tiler provide these, and if not, could you help me figure out how to get them in a convenient way? It would then be good to add this feature to the tiler.

thanks,

Carst

more info on: https://sentinel.esa.int/documents/247904/685163/S1-Radiometric-Calibration-V1.0.pdf

vincentsarago commented 3 years ago

thanks @Blockchainforcommons, I didn't realize the Sentinel-1 data needed more processing 🤦

Do you think the pre-processing should be done by default by rio-tiler-pds? I don't have time to go through the whole PDF, but if you can provide the details I could add this in the Sentinel-1 reader.

Basically we need to:

  1. fetch the files ~here https://github.com/cogeotiff/rio-tiler-pds/blob/master/rio_tiler_pds/sentinel/aws/sentinel1.py#L60
  2. if it's a simple calculation we can add a post_process callback like https://github.com/cogeotiff/rio-tiler-pds/blob/master/rio_tiler_pds/landsat/aws/landsat8.py#L73 (see the sketch below)
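For illustration only, here is a minimal sketch of the shape such a callback could take, assuming rio-tiler's convention of a callable receiving the data and mask arrays and returning them modified; `make_to_db` and `calibration_lut` are hypothetical names, and the dB conversion itself is the one described in the next comment:

import numpy

def make_to_db(calibration_lut: numpy.ndarray):
    """Build a (data, mask) -> (data, mask) post-process callable.

    `calibration_lut` would hold the sigma/beta/gamma nought coefficients read from
    the calibration annotation, resampled to the same shape as the tile data.
    """

    def to_db(data: numpy.ndarray, mask: numpy.ndarray):
        # convert amplitude (DN) to calibrated backscatter in decibel
        db = 10.0 * numpy.log10(data.astype("float64") ** 2 / calibration_lut ** 2)
        return db, mask

    return to_db

The factory keeps the callable at the two-argument (data, mask) signature while still carrying the calibration coefficients.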
Blockchainforcommons commented 3 years ago

Hi Vincent, yes, a lot of pre-processing is required with SAR-like data, but the specifics depend on the use case. A different calibration is wanted depending on whether I use the output for crop yield estimation, detecting forest fires, or finding out how ice caps are melting.

Adding all these options is possible, but that is not really the goal of rio-tiler, I think. It is a great tool to extract data from a part of the scene, but without the azimuth and pixel count data, the amplitude array we are now getting is much less valuable.

For example, if I want to calculate the VV backscatter in decibel, which is the standard way of representing SAR data, I need to perform:

10 * log10(A² / coefficient²)

where A is the amplitude value we fetch now.

For example, fetching the calibration file:

aws s3 cp s3://sentinel-s1-l1c/GRD/2020/10/28/IW/DV/S1A_IW_GRDH_1SDV_20201028T003207_20201028T003232_034989_0414CB_EE56/annotation/calibration/calibration-iw-vh.xml . --request-payer requester

In there we find a list of calibrationVectors, and we look up our value based on the azimuth line and the pixel count of our location.

The azimuth line and pixel count are just the X,Y pixel coordinates within the scene. So if the TIFF has 10,000 x 10,000 pixels and the extracted point is at pixel 3500 x 4000, then the azimuth line is 3500 and the pixel count is 4000. In the annotation file, you then look up the closest line/count to the extracted point.

If you want rio-tiler to also return the data in decibel, you can fetch the annotation files, find the coefficient values by pixel, and perform the calculations, depending on the option (sigma/gamma/beta) the user requires.
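To make that workflow concrete, here is a rough sketch (not an existing rio-tiler-pds API): it fetches the calibration annotation from the requester-pays bucket with boto3, parses the calibrationVectorList, looks up the nearest line/pixel entry, and applies the dB formula above. The helper names, the choice of the sigmaNought LUT, and the nearest-neighbour lookup are illustrative assumptions:

import xml.etree.ElementTree as ET

import boto3
import numpy

BUCKET = "sentinel-s1-l1c"
KEY = (
    "GRD/2020/10/28/IW/DV/S1A_IW_GRDH_1SDV_20201028T003207_20201028T003232_034989_0414CB_EE56"
    "/annotation/calibration/calibration-iw-vh.xml"
)

# fetch the calibration annotation (requester-pays bucket)
client = boto3.client("s3")
body = client.get_object(Bucket=BUCKET, Key=KEY, RequestPayer="requester")["Body"].read()
root = ET.fromstring(body)

# collect (line, pixels, sigmaNought) triplets from the calibrationVectorList
vectors = []
for vec in root.find("calibrationVectorList"):
    line = int(vec.findtext("line"))
    pixels = numpy.array(vec.findtext("pixel").split(), dtype="int32")
    sigma = numpy.array(vec.findtext("sigmaNought").split(), dtype="float64")
    vectors.append((line, pixels, sigma))

def sigma_nought_db(amplitude, line, pixel):
    """Convert an amplitude (DN) at a given azimuth line / pixel count to sigma0 in dB."""
    # nearest calibration vector in azimuth, then nearest pixel in range
    _, pixels, sigma = min(vectors, key=lambda v: abs(v[0] - line))
    coefficient = sigma[numpy.abs(pixels - pixel).argmin()]
    return 10.0 * numpy.log10(amplitude ** 2 / coefficient ** 2)

For a whole tile the lookup would of course be vectorized (interpolating the calibration grid to the tile shape) rather than done point by point, but the nearest-neighbour version shows the lookup described above.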

Now, this is just calibration; there are also speckle filtering, terrain correction, and more, but I think calibration is the most needed. Also, you can see Sentinel Hub delivering just this:

See under Units: https://docs.sentinel-hub.com/api/latest/data/sentinel-1-grd/

You do have to make this only available for GRD scenes, as SLC requires other steps beforehand.

To summarize: getting the azimuth line and pixel count is standard and very important for using Sentinel-1 data. If you want to give the user more features, the 'howevers and buts' expand fast.

Let me know what i can do,

Glad to help you out,

Carst

vincentsarago commented 3 years ago

@Blockchainforcommons thanks, that's really useful! Getting the info for a tile might be a bit complicated because we reproject the data, and we might not be able to know which pixel/line of the original file we are using 🤔

If we find a way to retrieve the correct values, we might be able to use the new `ImageData` class (introduced in `rio-tiler==2.0.0rc1`) to hold those values and then let the user do whatever is needed.

from rio_tiler_pds.sentinel.aws import S1L1CReader

with S1L1CReader(...) as scene:
    # fetch `productinfo`
    # fetch `*.xml`
    # maybe construct an `image-like` array from the XML values

    # in addition to the `data`, the `S1L1CReader` will add (within the ImageData class) the values from the xml
    data = scene.tile(...)
    assert data.data
    assert data.gamma  # should be the same shape as data

    # we can use new ImageData method to apply operation on the data array
    img = data.apply_calibration("sigmaNought")  # available: sigmaNought, betaNought, gammaNought

🤷‍♂️, @Blockchainforcommons do you know any python libs that work with Sentinel-1 GRD we could 👀 (or import)?

> Now, this is just calibration; there are also speckle filtering, terrain correction, and more, but I think calibration is the most needed.

Those might be a bit complex to implement; do you know any python libs that are able to do this (specifically the terrain correction)?

Blockchainforcommons commented 3 years ago

> Getting the info for a tile might be a bit complicated because we reproject the data, and we might not be able to know which pixel/line of the original file we are using

Is that also the case for a point?

If so, we can get the bbox and the image size of the scene and estimate the pixel location.
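As a rough sketch of that estimate (assuming the scene bounds are available as (west, south, east, north) in the same CRS as the queried point, and treating the footprint as axis-aligned, which a real GRD footprint is not, so this is only approximate):

def estimate_line_pixel(lng, lat, bounds, width, height):
    """Estimate the azimuth line / pixel count of a lon/lat point by linear interpolation."""
    west, south, east, north = bounds
    pixel = round((lng - west) / (east - west) * (width - 1))
    line = round((north - lat) / (north - south) * (height - 1))
    return line, pixel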

> Those might be a bit complex to implement; do you know any python libs that are able to do this (specifically the terrain correction)?

Yes, take a look at Snappy (make sure you have the right one) and PyroSAR, like this:

https://forum.step.esa.int/t/using-snappy-for-sentinel1-grd-preprocessing/19684

I'm not sure if the SNAP software has to be installed before using Snappy.

Blockchainforcommons commented 3 years ago

@vincentsarago any update, e.g. about the pixel/line retrieval from the original file?

vincentsarago commented 3 years ago

Well, right now I don't have time to work on this, I'm sorry! I may have time to review a PR if you want to work on it!

Blockchainforcommons commented 3 years ago

@vincentsarago no problem! It got me thinking. From the metadata we can get the image size, and there are also these values, which strike me as the scene corners in lat/lng:

first_far_lat 13.488847732543945 float64 deg
first_far_long 78.43860631113749 float64 deg
last_near_lat 11.533498764038086 float64 deg
last_near_long 80.4279174213898 float64 deg
last_far_lat 11.980189323425293 float64 deg
last_far_long 78.14514937886506 float64 deg

Since we know the lat/lng of the tile/point, we can get the azimuth/count from that. Am I right?
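A sketch of that idea, fitting an affine mapping from (lng, lat) to (line, pixel) through the scene corners; the corner-to-pixel assignments, the scene size, and the first_near values (which are not in the metadata snippet above) are all assumptions, and the geolocation grid points in the annotation file would give a more accurate answer:

import numpy

height, width = 16_698, 25_329  # scene size in pixels (example values)

# corner (lng, lat) -> (line, pixel); first_near values are placeholders
corners = {
    (80.70, 13.05): (0, 0),                                             # first_near (hypothetical)
    (78.43860631113749, 13.488847732543945): (0, width - 1),            # first_far
    (80.4279174213898, 11.533498764038086): (height - 1, 0),            # last_near
    (78.14514937886506, 11.980189323425293): (height - 1, width - 1),   # last_far
}

# solve [lng, lat, 1] @ M = [line, pixel] in a least-squares sense
A = numpy.array([[lng, lat, 1.0] for lng, lat in corners])
B = numpy.array(list(corners.values()), dtype="float64")
M, *_ = numpy.linalg.lstsq(A, B, rcond=None)

def lonlat_to_line_pixel(lng, lat):
    """Estimate the azimuth line / pixel count of a lon/lat point from the corner fit."""
    line, pixel = numpy.array([lng, lat, 1.0]) @ M
    return round(line), round(pixel)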