This builds on the read/write of Node data to files @diegostruk1 introduced in PR #35.
The key changes are in `pyworkflow/workflow.py` and are:

- Moves (most) file handling to the `Workflow` object, using the `FileStorage` API. Node data is written using `store_node_data()` and read using `retrieve_node_data()`.
- Adds an `execute(node_id)` method to the `Workflow` that orchestrates Node execution (see the sketch after this list). It works by:
  1. Retrieving the `node_to_execute` specified by `node_id`
  2. Retrieving data from any preceding Nodes
  3. Passing that data to the Node's `execute()` function
  4. Writing that output to a file and storing the filename in the Node's `data` attribute
- Includes some basic error checking/exception handling, but not a lot.
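For concreteness, here is a minimal, self-contained sketch of what that orchestration might look like. The `Node`/`Workflow` classes, the `WorkflowException`, the `/tmp` data directory, and the predecessor bookkeeping below are simplified stand-ins for illustration, not the actual code in this PR:

```python
import os
import uuid

DATA_DIR = "/tmp"  # assumption: matches the directory used in the test steps below


class WorkflowException(Exception):
    """Raised for unknown Nodes or out-of-order execution (hypothetical)."""


class Node:
    def __init__(self, node_id, func):
        self.node_id = node_id
        self.func = func    # stand-in for the Node's execute() logic
        self.data = None    # filename of this Node's stored output


class Workflow:
    def __init__(self):
        self._nodes = {}           # node_id -> Node
        self._predecessors = {}    # node_id -> [preceding node_ids]

    def add_node(self, node, predecessors=()):
        self._nodes[node.node_id] = node
        self._predecessors[node.node_id] = list(predecessors)

    def store_node_data(self, node, contents):
        """Write a Node's output to a file and return the filename."""
        path = os.path.join(DATA_DIR, f"node_{node.node_id}_{uuid.uuid4().hex}.csv")
        with open(path, "w") as f:
            f.write(contents)
        return path

    def retrieve_node_data(self, node):
        """Read back the file a previously executed Node wrote."""
        if node.data is None:
            raise WorkflowException(f"Node {node.node_id} has not been executed yet")
        with open(node.data) as f:
            return f.read()

    def execute(self, node_id):
        # 1. Retrieve the node_to_execute specified by node_id
        node_to_execute = self._nodes.get(node_id)
        if node_to_execute is None:
            raise WorkflowException(f"Node {node_id} not found")

        # 2. Retrieve data from any preceding Nodes
        inputs = [self.retrieve_node_data(self._nodes[p])
                  for p in self._predecessors[node_id]]

        # 3. Pass that data to the Node's execute() function
        output = node_to_execute.func(inputs)

        # 4. Write the output to a file; store the filename on the Node
        node_to_execute.data = self.store_node_data(node_to_execute, output)
        return node_to_execute.data
```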
You can test execution using `sample_execution.zip`:

1. Move `join1.csv` and `join2.csv` to `/tmp`.
2. Load `execution_workflow.json` into Postman.
3. Manually execute Nodes 1-4. You should see the file names printed to the screen, and the "Retrieve data" test should return the data written to `/tmp`.
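If you'd rather script step 3 than click through Postman, something like the following should work. The base URL and endpoint paths here are guesses for illustration only; check `execution_workflow.json` for the actual requests:

```python
import requests  # assumption: the API is running locally on Flask's default port

BASE = "http://localhost:5000"  # hypothetical base URL

# Hypothetical endpoint shape; use the paths from execution_workflow.json.
for node_id in range(1, 5):
    resp = requests.post(f"{BASE}/node/{node_id}/execute")
    print(node_id, resp.status_code, resp.text)  # expect the output filename
```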
Edit: the latest commits add more error checking/handling. Specifically, endpoints should now return a 400/500 code with a message for missing data files or improper execution in most cases.
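As a rough illustration of that behavior (assuming a Flask-style API layer and reusing the `Workflow`/`WorkflowException` names from the sketch above; neither the route nor the handler is the actual endpoint code):

```python
from flask import Flask, jsonify

app = Flask(__name__)
workflow = Workflow()  # hypothetical: built from the loaded workflow JSON


@app.route("/node/<node_id>/execute", methods=["POST"])  # hypothetical route
def execute_node(node_id):
    try:
        filename = workflow.execute(node_id)
        return jsonify(filename=filename), 200
    except FileNotFoundError as e:
        # A preceding Node's data file is missing from /tmp
        return jsonify(message=str(e)), 400
    except WorkflowException as e:
        # Improper execution: unknown Node, predecessors not yet run, etc.
        return jsonify(message=str(e)), 400
    except Exception as e:
        # Anything unexpected becomes a 500 with a message, not a stack trace
        return jsonify(message=str(e)), 500
```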