Open mike-gangl opened 6 months ago
Yeah filtering and aggregation on the server would be ideal
@rtapella and @mike-gangl ping for status please
We've got a dev version that does:
status
execute
info
jobs
but they are limited to the 'applications' that are deployed and ready within the Airflow environment. We are unable to deploy an application entirely through the OGC endpoints, as they need a DAG sitting on the Airflow shared disk to map the deployed processes onto.
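To make the four supported operations concrete, here is a minimal sketch of the OGC API - Processes paths they correspond to. The base URL is made up, and the helper names are illustrative, not part of the actual client:

```python
# Hypothetical sketch of the endpoints behind status/execute/info/jobs.
# BASE is a placeholder; the path shapes follow OGC API - Processes Part 1.
BASE = "https://ades.example.com/ogc"

def process_info_url(process_id: str) -> str:
    # "info": describe a deployed process
    return f"{BASE}/processes/{process_id}"

def execute_url(process_id: str) -> str:
    # "execute": POST an execution request to this path
    return f"{BASE}/processes/{process_id}/execution"

def job_status_url(job_id: str) -> str:
    # "status": poll an individual job
    return f"{BASE}/jobs/{job_id}"

def jobs_url() -> str:
    # "jobs": list all jobs (a single flat /jobs endpoint)
    return f"{BASE}/jobs"
```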
@drewm-jpl or @jpl-btlunsfo were working on this previously, but I'm not sure where we stand. My concern is that needing to know Airflow DAGs is not a great route to go. My preference would be to pre-register the cwl-dag as an OGC process that we can throw any set of CWL and inputs into and execute to get us going. This actually makes "deploy" unneeded in the near term.
@mike-gangl so if we use cwl-dag, the "deploy" would effectively be on-the-fly as part of the job input-parameters, rather than deploying an App Package itself on the ADES?
@rtapella correct: there will be no "deploy" method for the generic runner. Well, there may need to be a step where we deploy the cwl-dag runner as an OGC process, but then you'd call that with cwl_workflow (the URL to your CWL file in Dockstore, GitHub, etc.) and cwl_arguments (a URL to JSON/YAML, or a JSON string) to execute the job.
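Under that model, an execute call would pass the workflow and its arguments as plain process inputs. A sketch of what the request body might look like, assuming the OGC API - Processes execute-request shape; the URLs are placeholders:

```python
import json

# Hypothetical execute request for a pre-registered cwl-dag process.
# cwl_workflow and cwl_arguments are the inputs discussed above; the
# URLs below are made-up examples.
execute_request = {
    "inputs": {
        # URL to the CWL file (e.g. in Dockstore or GitHub)
        "cwl_workflow": "https://example.com/workflows/l2-process.cwl",
        # URL to a JSON/YAML arguments file (a JSON string would also work)
        "cwl_arguments": "https://example.com/workflows/l2-args.yml",
    }
}

# This is what would be POSTed to /processes/cwl-dag/execution
body = json.dumps(execute_request)
```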
Add client API work into various OGC resources for:
Note, there are differences between the OGC Processes API and WPS. Mainly, the "jobs" endpoint is no longer `/processes/{process-id}/jobs`. There is only a `/jobs` endpoint, and at some point it should be parameterized to accept a variety of query parameters (e.g. process_id, status) to enable filtering of the results. We could do this filtering on the client side, but that seems pretty gross and isn't needed right now; either way, filtering on the backend or the frontend will be needed at some point.
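If we did go the client-side route in the interim, it would amount to fetching everything from `/jobs` and filtering locally. A sketch under that assumption; the field names (`jobID`, `processID`, `status`) follow the OGC API - Processes job-list schema, and the sample records are made up:

```python
# Made-up sample of what a GET /jobs response's job list might contain.
jobs = [
    {"jobID": "1", "processID": "cwl-dag", "status": "successful"},
    {"jobID": "2", "processID": "cwl-dag", "status": "running"},
    {"jobID": "3", "processID": "other-proc", "status": "successful"},
]

def filter_jobs(jobs, process_id=None, status=None):
    """Client-side stand-in for server-side ?processID=&status= filtering."""
    return [
        j for j in jobs
        if (process_id is None or j["processID"] == process_id)
        and (status is None or j["status"] == status)
    ]

running_cwl = filter_jobs(jobs, process_id="cwl-dag", status="running")
```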