I've run into cases where tests are known to fail in certain circumstances but shouldn't be counted against the test run on the whole. For example:
A charm that requires customer-specific authentication (e.g., via an environment variable or config option) to enable functionality.
A charm that is known not to work on certain providers (e.g., local/LXD).
So, in addition to PASS and FAIL, I propose we also allow a test to emit SKIP in cases where it cannot or should not run, so that automated test tooling (like cloud-weather-report) accurately reflects the tested state of the charm.
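As a rough illustration, here is a minimal sketch of what a skippable test could look like in Python's unittest, assuming the test runner maps skipped tests to a SKIP result rather than a failure. The environment variable names (CUSTOMER_AUTH_TOKEN, JUJU_PROVIDER) are hypothetical and only stand in for whatever mechanism the harness actually exposes:

```python
import os
import unittest


class CharmFunctionalTests(unittest.TestCase):

    # Hypothetical env variable: only run when customer-specific credentials
    # are supplied; otherwise report the test as skipped, not failed.
    @unittest.skipUnless(os.environ.get("CUSTOMER_AUTH_TOKEN"),
                         "customer-specific auth not configured")
    def test_authenticated_feature(self):
        self.assertTrue(True)  # placeholder for the real assertion

    # Hypothetical provider check: skip on providers the charm is known
    # not to support, such as the local/LXD provider.
    @unittest.skipIf(os.environ.get("JUJU_PROVIDER") == "lxd",
                     "charm is known not to work on local/LXD")
    def test_provider_specific_feature(self):
        self.assertTrue(True)  # placeholder for the real assertion


if __name__ == "__main__":
    unittest.main()
```

The point is not the specific mechanism; shell-based tests could equally signal SKIP with a reserved exit status, as long as the harness distinguishes it from FAIL when summarising the run.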