Closed soxofaan closed 1 year ago
I'm starting to think that it will become quite complex to implement this "smart" logic in the current traditional caching-layer approach (first check the cache, and (re)calculate if the entry is not available or outdated). As suggested by @m-mohr, it's probably easier to work with a background job that regularly regenerates all necessary metadata and pushes it to central storage, to be used by all workers. The logic in the workers is then limited to fetching the metadata, without the need for complicated caching rules and other mechanisms.
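The split described above (background job writes, workers only read) could be sketched roughly like this. All names here are hypothetical, simplified stand-ins, not the actual openeo-aggregator internals, and a plain JSON file stands in for the central storage:

```python
import json
import tempfile
import time
from pathlib import Path


def regenerate_metadata() -> dict:
    """Simulate collecting and merging metadata from all upstream back-ends."""
    return {"collections": ["S2", "S1"], "generated_at": time.time()}


def prime_cache(store_path: Path) -> None:
    """Background job side: regenerate metadata and push it to central storage."""
    metadata = regenerate_metadata()
    store_path.write_text(json.dumps(metadata))


def worker_get_metadata(store_path: Path) -> dict:
    """Worker side: just fetch the pre-built metadata; no caching rules needed."""
    return json.loads(store_path.read_text())


if __name__ == "__main__":
    store = Path(tempfile.gettempdir()) / "aggregator-metadata.json"
    prime_cache(store)
    print(worker_get_metadata(store)["collections"])
```

The point of the design is that all the expensive and failure-prone work sits in `prime_cache`, which can run on a schedule, while `worker_get_metadata` stays trivial and fast.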
Design somewhat similar to the jobtracker.
This ticket is roughly done: there is now an `openeo-aggregator-prime-caches` tool to prime the caches, e.g. Docker style:

```shell
docker run --rm \
    -e OPENEO_AGGREGATOR_CONFIG=/home/openeo/aggregator/conf/aggregator.dev.py \
    openeo-aggregator:latest \
    openeo-aggregator-prime-caches
```
Still to do as a follow-up:
I think we can close this now.
Apart from the implementation of the `openeo-aggregator-prime-caches` tool in the openeo-aggregator project, I also had to set it up in NiFi for scheduled runs, and had to spend some time getting the logging to work in Kibana.
Spin-off from #2