We mostly test the sunny cases at the moment, but should also test for (expected) failing test results.
An example would be iperf's self-test, which checks for a Nornir result that does not reach the required bandwidth.
Related to test-bundles => put test in related self-test
Self-test of infrastructure => create separate test file
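A minimal sketch of such a failure-path self-test, assuming a parsed iperf JSON result; the helper extract_bps, the constant REQUIRED_BPS and the data layout are made up for illustration and will differ from the actual nuts fixtures:

```python
import pytest

# Hypothetical minimum bandwidth taken from the test bundle (bits per second).
REQUIRED_BPS = 10_000_000


def extract_bps(iperf_result: dict) -> float:
    """Pull the measured bandwidth out of a parsed iperf JSON result (layout assumed)."""
    return iperf_result["end"]["sum_received"]["bits_per_second"]


def test_bandwidth_below_threshold_is_flagged():
    # Simulated iperf result that deliberately misses the required bandwidth.
    too_slow = {"end": {"sum_received": {"bits_per_second": 5_000_000}}}

    # The self-test asserts that the check correctly fails for this result,
    # instead of only covering the sunny case where the bandwidth is met.
    with pytest.raises(AssertionError):
        assert extract_bps(too_slow) >= REQUIRED_BPS
```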
What can possibly go wrong:
[x] Nornir inventory has a wrong config
[ ] Mismatch between test-bundle data and available hosts, i.e. the test-bundle contains hosts that do not exist in the Nornir inventory (or vice versa) => is this test needed? Or is it only relevant in a later dev stage of nuts?
Nornir task fails (gathering data from hosts) => check whether this is not already covered (NutsResult)
--> This has already been implemented in the NutsResult.validate() method (see the sketch after this list)
(transform result fails)
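A rough sketch of the kind of check NutsResult.validate() performs when a Nornir task fails, using a simplified stand-in class (NutsResultSketch); the real attributes and error handling in nuts may differ:

```python
import pytest


class NutsResultSketch:
    """Simplified stand-in for nuts' NutsResult, for illustration only."""

    def __init__(self, result=None, failed=False, exception=None):
        self.result = result
        self.failed = failed
        self.exception = exception

    def validate(self) -> None:
        # If the Nornir task threw an exception or failed while gathering data
        # from a host, surface that clearly instead of crashing later on a
        # missing or malformed result.
        if self.exception is not None:
            raise AssertionError(f"An exception was thrown: {self.exception!r}")
        if self.failed:
            raise AssertionError("Nornir task failed while gathering data from hosts")


def test_validate_flags_failed_nornir_task():
    failed_result = NutsResultSketch(failed=True)
    with pytest.raises(AssertionError):
        failed_result.validate()
```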
There are currently no tests that check what happens if:
[x] The user has written the test class name wrong, e.g. "- test_class: TestNapalmPingg" instead of "- test_class: TestNapalmPing"
[x] The test class is not implemented yet
[x] The test class has no index but is implemented
This especially concerns the function "load_module" in yaml_to_test.py. The current tests only cover the sunny cases.
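One possible shape for such rainy-day tests, using a hypothetical stand-in load_test_class for load_module (the real signature in yaml_to_test.py may differ) and stdlib modules in place of actual nuts test modules:

```python
import importlib

import pytest


def load_test_class(module_path: str, class_name: str):
    """Hypothetical stand-in for load_module in yaml_to_test.py (real signature may differ)."""
    module = importlib.import_module(module_path)
    return getattr(module, class_name)


def test_mistyped_class_name_is_reported():
    # "TestNapalmPingg" instead of "TestNapalmPing": the lookup should fail loudly.
    # The stdlib module "os" only stands in for a real nuts test module here.
    with pytest.raises(AttributeError):
        load_test_class("os", "TestNapalmPingg")


def test_unimplemented_test_module_is_reported():
    # A test bundle referencing a test class whose module is not implemented yet.
    with pytest.raises(ModuleNotFoundError):
        load_test_class("nuts.unimplemented_test_module", "TestNapalmPing")
```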