A buffer of 10 m is applied to the geometry input to ensure the datacube is large enough: https://github.com/Open-EO/openeo-geopyspark-driver/blob/35b4db997fe4aa4fec49928dde0642f285d12ead/openeogeotrellis/utils.py#L231
This also seems to be confirmed by the extents in the logs: points that end up as 'null' are inside the loaded extent.
A 10 m buffer is also used for global_extent: https://github.com/Open-EO/openeo-python-driver/blob/22f89cfaa30306115f5479bdfc8b9ea8ebd0f04e/openeo_driver/dry_run.py#L587
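For reference, a minimal sketch (not the driver's actual code, and assuming a metric CRS) of what such a buffer amounts to: the load extent becomes the geometry's bounding box grown by roughly 10 m, so a sampling point near the border should still be covered.

```python
# Illustrative sketch only; not the driver's implementation.
from shapely.geometry import Point, box

def buffered_extent(geometry, buffer_m=10.0):
    """Bounding box of the geometry grown by `buffer_m` metres (metric CRS assumed)."""
    minx, miny, maxx, maxy = geometry.buffer(buffer_m).bounds
    return box(minx, miny, maxx, maxy)

# Hypothetical UTM point near a tile border: it lies inside the buffered extent,
# matching the observation that 'null' points are still inside the loaded extent.
point = Point(644253.0, 5675120.0)
extent = buffered_extent(point)
print(extent.bounds, extent.contains(point))
```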
Important: the issue only occurs when merge_cubes is present! So the normal aggregate_spatial with points on the DEM + resampling seems to work just fine.
The wenr features are not resampled at load time, because merge_cubes is not seen as a resampling operation. Supporting that would certainly simplify matters in this case.
Confirmed this by adding an explicit resample_spatial for wenr: that also solves the DEM issue.
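Roughly, the workaround looks like this with the openEO Python client; the backend URL, collection ids, CRS and resolution below are placeholders, not taken from the failing job.

```python
import openeo

connection = openeo.connect("openeo.vito.be").authenticate_oidc()  # placeholder backend

dem = connection.load_collection("COPERNICUS_30", bands=["DEM"])   # placeholder id
wenr = connection.load_collection("WENR_FEATURES")                 # placeholder id

# Explicitly put the wenr cube on the target (UTM) grid before merging, because
# merge_cubes itself is not treated as a resampling operation at load time.
wenr_utm = wenr.resample_spatial(resolution=10, projection=32631, method="near")

merged = dem.merge_cubes(wenr_utm)
```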
aggregate_spatial with points seems to allow points at the border to fall off. Job example: j-241108dc1e1848248f81c211bdb731e0
The process graph without aggregate_spatial illustrates the problem nicely: the DEM is loaded in EPSG:4326, then resampled to UTM, and seems to be clipped somewhere along the way.
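A reduced reproduction along those lines (again with placeholder backend, collection ids and extent) downloads the merged raster so its actual extent can be compared with the requested bbox:

```python
import openeo

connection = openeo.connect("openeo.vito.be").authenticate_oidc()  # placeholder backend

bbox = {"west": 5.05, "south": 51.20, "east": 5.10, "north": 51.25}  # hypothetical AOI

# DEM is loaded in EPSG:4326 and then resampled to a UTM grid.
dem = connection.load_collection("COPERNICUS_30", spatial_extent=bbox, bands=["DEM"])
dem_utm = dem.resample_spatial(resolution=10, projection=32631)

# Merging with a second, unresampled cube is what seems to trigger the clipping.
wenr = connection.load_collection("WENR_FEATURES", spatial_extent=bbox)  # placeholder id
merged = dem_utm.merge_cubes(wenr)

# No aggregate_spatial: inspect the downloaded raster's extent directly.
merged.download("merged_dem_wenr.nc")
```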
Not sure yet whether the solution is simply to use larger bboxes, or whether something actually goes wrong when we merge the cubes...