mlin closed this pull request 4 years ago
To see what's going on it's probably easier to browse the tree than the PR diff: https://github.com/chanzuckerberg/idseq-workflows/tree/mlin-generate-task-tests/tests/host_filter/tasks
Now with README: https://github.com/chanzuckerberg/idseq-workflows/tree/mlin-generate-task-tests/tests
Marked 3 cases as expected fail (xfail) while we work through how to adapt the S3 interactions.
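For illustration, marking a generated case as expected-fail looks something like this (the test name and reason string are hypothetical, not the actual generated code):

```python
import pytest

# Hypothetical sketch: a case whose S3 interactions haven't been adapted
# yet gets marked xfail so the suite stays green while we work on it.
@pytest.mark.xfail(reason="S3 interactions not yet adapted")
def test_task_with_s3_input():
    raise NotImplementedError("S3 localization pending")
```

pytest reports such cases as `xfail` rather than failures, and flags them `XPASS` if they unexpectedly start passing.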
@kislyuk @katrinakalantar @morsecodist any feedback on whether this seems like a feasible skeleton for adding more test cases and assertions? (Start from the README and the task cases directory, as the diff view isn't navigable.)
@mlin the test structure and fixtures/helpers look good. Do you have any guidance for finishing test coverage for all the steps?
I agree that the structure looks good!
The cases are generated using this script, which runs the workflow to completion on given test inputs, records all the intermediate inputs and outputs, and generates a pytest case for each task; each generated case runs the task individually and compares its actual outputs against the recorded expected outputs. A set of pytest fixtures minimizes the boilerplate required in each test case. The script is interesting in its own right and, with further generalization, might eventually make its way into a first-class miniwdl tool.
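The shape of a generated per-task case can be sketched roughly as follows. This is an illustrative mock-up, not the actual generated code: the fixture name, the inlined recorded data, and the `run_task` stand-in (which in reality would invoke the WDL task via miniwdl) are all assumptions.

```python
import pytest

@pytest.fixture
def recorded_case():
    # In the real harness this would load the inputs and expected outputs
    # recorded during a full workflow run; inlined here for illustration.
    return {
        "inputs": {"reads": "bench3_R1.fastq"},
        "expected_outputs": {"filtered_read_count": 42},
    }

def run_task(inputs):
    # Stand-in for running the individual WDL task on the recorded inputs.
    return {"filtered_read_count": 42}

def test_task_outputs_match(recorded_case):
    # Re-run the task in isolation and compare against the recorded outputs.
    actual = run_task(recorded_case["inputs"])
    assert actual == recorded_case["expected_outputs"]
```

The fixtures keep each generated case down to a few lines: the case only names its recorded data and asserts output equality.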
The current host_filter cases are derived from one of the synthetic inputs from the benchmarking paper ("bench3"). More can be slotted in later. Current warts: