Confirmed it's working.
Here is an example output:
```
failed: false
return_code: 0
stderr: ''
stdout: '1..30
ok 1 load command works and is idempotent (setup: 0s, test: 4s)
ok 2 SETUP: reinstall the examples pack and set actionrunner.stream_output to True (setup: 0s, test: 31s)
ok 3 st2 execution tail works correctly for simple actions (setup: 0s, test: 46s)
ok 4 st2 execution tail works correctly for action chain workflows (setup: 0s, test: 33s)
ok 5 st2 execution tail command works correctly for Mistral workflows # skip Mistral not available, skipping tests (setup: 0s, test: 0s)
ok 6 st2 execution tail command works correctly for Orquesta workflows (setup: 0s, test: 30s)
ok 7 st2 execution list include attributes works (setup: 0s, test: 0s)
ok 8 st2 execution list include nonexistent attribute errors (setup: 0s, test: 1s)
ok 9 st2 action list include attributes works (setup: 0s, test: 0s)
ok 10 st2 action list include nonexistent attribute errors (setup: 0s, test: 1s)
ok 11 default note in execution list (setup: 0s, test: 0s)
ok 12 default note in trace list (setup: 0s, test: 1s)
ok 13 default note in trigger instance list (setup: 0s, test: 1s)
ok 14 default note in rule list (setup: 0s, test: 0s)
ok 15 default note in rule enforcement list (setup: 0s, test: 0s)
ok 16 default note in key/value list (setup: 0s, test: 0s)
ok 17 note when action execution limit is 1 (setup: 0s, test: 1s)
ok 18 note when trace limit is 1 (setup: 0s, test: 1s)
ok 19 note when trigger instance limit is 1 (setup: 0s, test: 1s)
ok 20 note when rule limit is 1 (setup: 0s, test: 1s)
ok 21 note when rule enforcement limit is 1 (setup: 0s, test: 0s)
ok 22 no note on action execution list with JSON/YAML output (setup: 0s, test: 1s)
ok 23 no note on trace list with JSON/YAML output (setup: 0s, test: 0s)
ok 24 no note on trigger instance list with JSON/YAML output (setup: 0s, test: 1s)
ok 25 no note on rule list with JSON/YAML output (setup: 0s, test: 0s)
ok 26 no note on rule enforcement list with JSON/YAML output (setup: 0s, test: 0s)
ok 27 no note on key/value list with JSON/YAML output (setup: 0s, test: 1s)
ok 28 packs.setup_virtualenv without python3 flags works and defaults to Python 2 # skip StackStorm components are already running under Python 3, skipping tests (setup: 11s, test: s)
ok 29 packs.setup_virtualenv with python3 flag works # skip StackStorm components are already running under Python 3, skipping tests (setup: 11s, test: s)
ok 30 python3 imports work correctly # skip StackStorm components are already running under Python 3, skipping tests (setup: 11s, test: s)'
succeeded: true
```
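As a quick illustration (not part of this PR), the `(setup: Xs, test: Ys)` suffixes emitted by the patched bats can be post-processed to rank the slowest tests; the two sample lines below stand in for a full run:

```shell
#!/bin/sh
# Sketch: sort TAP lines by their 'test:' duration, slowest first.
# Assumes the '(setup: Xs, test: Ys)' suffix format shown in the example
# output above; the sample lines are copied from that output.
printf '%s\n' \
  'ok 3 st2 execution tail works correctly for simple actions (setup: 0s, test: 46s)' \
  'ok 1 load command works and is idempotent (setup: 0s, test: 4s)' \
  | sed -E 's/.*test: ([0-9]+)s\)$/\1 &/' \
  | sort -rn
```

This prefixes each line with its test duration in seconds and sorts numerically in reverse, so the 46s test is printed before the 4s one.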
This output already identified some easy wins, e.g. we should skip the setup step when the tests themselves are skipped.
This pull request updates our code to use bats from my fork, which adds per-test timing information - https://github.com/bats-core/bats-core/pull/221
I'm all for upstream first, and I will also try to get the change finished and merged upstream (if they will accept it), but at the moment we don't have time to wait on that. It could take a while, and we are currently running blind as far as per-test function timing goes.