My suggestion is to create the package scripts in forestTIME and then create a secondary repo that is just an updater.
The updater repo should contain a Dockerfile implementing a script that 1) downloads tables from DataMart into raw-tables.duckdb, 2) generates derived-tables.duckdb, 3) possibly exports the tables to a more general-purpose format, e.g. .csv, and 4) pushes the tables to Zenodo using its REST API.
Then the updater repo can be configured with GH Actions to run on a schedule.
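A scheduled workflow for the updater repo might look like the following. The cron expression, workflow name, and secret name are placeholders, not decisions the note has made.

```yaml
# .github/workflows/update.yml -- illustrative only
name: update-tables
on:
  schedule:
    - cron: "0 6 1 * *"   # monthly: 06:00 UTC on the 1st
  workflow_dispatch: {}    # also allow manual runs
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build updater image
        run: docker build -t updater .
      - name: Run updater
        run: docker run -e ZENODO_TOKEN updater
        env:
          ZENODO_TOKEN: ${{ secrets.ZENODO_TOKEN }}
```

The Zenodo token would live in the repo's Actions secrets and be passed into the container as an environment variable rather than baked into the image.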