When running InSpec tests, the results show the number of passed, skipped, and failed tests. From my perspective, failures are something that should never happen (i.e., a build should not be allowed to proceed if a test fails), and skipped tests should be reviewed. For example, an accepted risk might use a skip (with a comment pointing to documentation about the accepted risk, mitigations, etc.), or an unimplemented test might use a skip to indicate that the control exists but we don't know whether it is passing or failing.
On the other hand, I notice that a lot of the controls here use skip in controls that say something like "If X is used, then it must be properly configured" when "X" is not actually in use (e.g., 2.2.1.3).
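For reference, the pattern I'm describing looks roughly like this (a hypothetical sketch; the package name, config path, and setting are placeholders, not the actual 2.2.1.3 content):

```ruby
# Hypothetical sketch of the current pattern: skip when the package is absent.
# "foo" and its config path are placeholders, not the real 2.2.1.3 resource.
control '2.2.1.3' do
  title 'If foo is used, it must be properly configured'

  if package('foo').installed?
    describe file('/etc/foo/foo.conf') do
      its('content') { should match(/^secure_option\s*=\s*true$/) }
    end
  else
    describe 'foo package' do
      skip 'foo is not installed, so this control does not apply'
    end
  end
end
```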
What are your thoughts on changing controls like 2.2.1.3 to pass when the package they are concerned with is not in use, and adding skips to unimplemented controls (e.g., 3.6)?
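Concretely, here is roughly what I have in mind for both changes (again a sketch with placeholder names, not the real control bodies):

```ruby
# Sketch of the proposed change for controls like 2.2.1.3:
# pass (rather than skip) when the package is not in use.
control '2.2.1.3' do
  title 'If foo is used, it must be properly configured'

  if package('foo').installed?
    describe file('/etc/foo/foo.conf') do
      its('content') { should match(/^secure_option\s*=\s*true$/) }
    end
  else
    # The requirement is trivially satisfied, so report a pass instead of a skip.
    describe package('foo') do
      it { should_not be_installed }
    end
  end
end

# Sketch of an explicit skip for an unimplemented control like 3.6,
# so it shows up in the results as needing review rather than being silently absent.
control '3.6' do
  title 'Placeholder for an unimplemented control'

  describe 'Control 3.6' do
    skip 'Not yet implemented'
  end
end
```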