r-earthengine / ee_extra

A ninja python package that unifies the Google Earth Engine ecosystem.
https://ee-extra.readthedocs.io/

Moving `matchHistogram` and `panSharpen` from `eemont` to `ee_extra` #28

Closed davemlz closed 2 years ago

davemlz commented 2 years ago

Hi, @aazuspan!

@csaybar and I are putting the final touches on ee_extra, and we want to know if you would be willing to move the amazing matchHistogram and panSharpen resources from eemont to this repo and make them available for the whole ecosystem.

Please let me know if you accept! :)

And if you want, I can move them myself (and also re-arrange the resources in eemont by using ee_extra), just let me know!

Cheers!

Dave

aazuspan commented 2 years ago

Hey @davemlz!

I'd be happy to move those over. I'll assign myself. Any thoughts on where to put them? Seems like matchHistogram would fit in the Spectral module, but not sure about panSharpen.

davemlz commented 2 years ago

Hi, @aazuspan!

That's amazing! @csaybar is going to create an Algorithms subpackage, and you can include panSharpen there! :D

If you need anything else, please let us know!

Cheers!

davemlz commented 2 years ago

Hi, @aazuspan @csaybar!

I already created the Algorithms subpackage :)

Cheers!

aazuspan commented 2 years ago

Thanks @davemlz!

davemlz commented 2 years ago

Hi, @aazuspan!

I've merged the matchHistogram PR, thank you very much! :) Let me know when you add it to eemont! :D

aazuspan commented 2 years ago

Thanks @davemlz, will do :)

aazuspan commented 2 years ago

Hey @davemlz and @csaybar, I was thinking about making a couple changes when I migrate the pan-sharpening methods over and wanted to get some input before I move ahead.

Currently in eemont there are a number of quality assessment algorithms that can be run during pan-sharpening to evaluate the sharpening results. Each metric is a callable class that compares two images and returns either a band-wise dictionary or an image-wise value indicating correlation/distortion/bias/etc. between the images. This functionality is all hidden inside panSharpen, so there's no way to access the metrics independently.

To make these accessible to users and other modules, I was thinking I could add a metrics sub-module to the QA module, so that they could be accessed like so:

>>> from ee_extra.QA.metrics import UIQI
>>> # Returns an ee.Dictionary object
>>> quality = UIQI(img1, img2)
>>> quality.getInfo()
{"B2": 0.00165, "B3": 0.00095, "B4": 0.00347}

Does that sound like a good plan?
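For reference, UIQI here is Wang & Bovik's Universal Image Quality Index. A minimal client-side NumPy sketch of the underlying math for a single band (not the ee_extra implementation, which operates server-side on ee.Image objects and returns band-wise ee.Dictionary results) might look like:

```python
import numpy as np

def uiqi(x: np.ndarray, y: np.ndarray) -> float:
    """Universal Image Quality Index (Wang & Bovik, 2002) for one band.

    Combines correlation, luminance similarity, and contrast similarity
    into a single score in [-1, 1], where 1 means identical images.
    """
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

# An image compared with itself scores a perfect 1.0
img = np.arange(12, dtype=float).reshape(3, 4)
print(uiqi(img, img))  # → 1.0
```

In ee_extra the same comparison would be expressed with ee.Image reducers and returned per band, but the formula is identical.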

csaybar commented 2 years ago

Hi @aazuspan, I think it is a great idea, because we could use this metrics module not just for pan-sharpening but for ML models as well, couldn't we? Maybe, if possible, we could use the same names as other metric packages like torchmetrics: https://torchmetrics.readthedocs.io/en/latest/

aazuspan commented 2 years ago

Yeah, the image metrics could definitely be useful for ML if you needed to compare image similarity, e.g. training a super-resolution model. It doesn't look like there's any crossover between the metrics in torchmetrics and the ones I've implemented, but I'll keep an eye out for other ML libraries and try to stay consistent. I'll also look into the metrics from torchmetrics to see what it would take to add those!

davemlz commented 2 years ago

This (https://github.com/aazuspan/ee_extra/commit/bfe7d0303bdb76e1553089cfdcc12af2b47d0bfa) is beautiful, @aazuspan! Thank you!

Taking this into consideration, I'll open a new Metrics issue where we can keep track of the metrics we are adding :)