Writing UDFs can be cumbersome when your development cycle involves submitting your process graph to a real openEO backend.
Local UDF processing shortens the development cycle, but that is typically done with throwaway experiments.
Even better would be to leverage unit tests, e.g. with test functions directly included in the UDF Python file.
In the Python client we could provide some helpers to generate dummy data, e.g. something like:
def apply_datacube(cube: XarrayDataCube, context: dict) -> XarrayDataCube:
    ...

def test_apply_datacube():
    cube = produce_dummy_data(..., desired bands, temporal resolution, ...)
    result = apply_datacube(cube, context={})
    # check actual results against expectations
The user can then easily run the tests with something like pytest my_udf.py.
And maybe backends could even provide services to run the tests in their runtime environment.
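To make the idea concrete, here is a minimal sketch of what such a dummy-data helper and in-file test could look like. Note that this is an illustration, not the actual client API: `produce_dummy_data` is a hypothetical helper, the array shapes and band names are made up, and plain NumPy arrays stand in for the `XarrayDataCube` wrapper the real client would use.

```python
import numpy as np


def produce_dummy_data(bands=("B02", "B03"), times=3, ysize=4, xsize=4, seed=42):
    """Hypothetical helper: build a random (t, bands, y, x) array as dummy cube data."""
    rng = np.random.default_rng(seed)
    return rng.random((times, len(bands), ysize, xsize))


def apply_datacube(cube, context: dict):
    # Toy UDF logic for the sake of the example: rescale values to [0, 100].
    return cube * 100


def test_apply_datacube():
    cube = produce_dummy_data(bands=("B02", "B03"), times=3)
    result = apply_datacube(cube, context={})
    # Check actual results against expectations.
    assert result.shape == cube.shape
    assert float(result.max()) <= 100.0
```

With the test function living next to the UDF, `pytest my_udf.py` picks it up automatically, so no extra test scaffolding is needed.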