Closed AErmie closed 2 years ago
Thanks for the idea, sounds reasonable. The logging issue is due to a broken image; see the mentioned issue.
Please explain what kind of logic you would like around score metrics. The problem with the scoring system is that there's no definition of what a good score is.
That's a good point @konstruktoid, that there is no definition of a "good" score. I would start with what the maximum score (per group) could be. For example, the `container_images` checks (if I'm calculating it correctly) could have a maximum score of 11 (as there are 11 checks). For `container_runtime`, it could be 31.

If every `note` and `info` counts as zero (0), `pass` is +1, and `warn` is -1, it would actually be helpful to have counts by type, like this:

- MAX: 31
- PASS: 18
- WARN: 9
This way, we could do some calculations such as MAX (which is 31 in this example) - PASS (which is 18 in this example) = 13, then WARN (which is 9 in this example) / 13 (the result of MAX - PASS) = 69.23% (as a percentage). Then we could decide that if % WARN > 50%, exit 1 (i.e. fail/break the build pipeline).
With this approach, you don't have to define the scoring metrics. As long as all the elements are exposed/addressable in output, we can use the values for our own scoring requirements.
This could be achieved by parsing the JSON log. Example crude one-liner:

```console
$ MAX=$(grep -c '"result":' log/docker-bench-security.log.json) PASS=$(grep -c '"result": "PASS"' log/docker-bench-security.log.json) WARN=$(grep -c '"result": "WARN"' log/docker-bench-security.log.json); echo "(${WARN}/(${MAX}-${PASS})*100)" | bc -l
46.87500000000000000000
```
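For an actual pipeline step, that one-liner can be wrapped into a small gate function. This is only a sketch under the assumptions already made in this thread (the log path, the `"result":` grep patterns, and the 50% threshold are our own choices, not anything docker-bench-security defines):

```shell
#!/bin/sh
# Gate a CI job on the WARN percentage parsed out of the JSON log.
# Returns 0 (pass) or non-zero (fail the pipeline).
score_gate() {
    log="$1"
    max=$(grep -c '"result":' "$log")
    pass=$(grep -c '"result": "PASS"' "$log")
    warn=$(grep -c '"result": "WARN"' "$log")

    # Avoid dividing by zero when every check passed.
    if [ "$max" -eq "$pass" ]; then
        echo "all ${max} checks passed"
        return 0
    fi

    pct=$(echo "${warn} / (${max} - ${pass}) * 100" | bc -l)
    printf 'MAX=%s PASS=%s WARN=%s WARN%%=%.2f\n' "$max" "$pass" "$warn" "$pct"

    # Fail when more than half of the non-pass checks are warnings.
    [ "$(echo "${pct} > 50" | bc -l)" -eq 0 ]
}

# Usage (path taken from the example above):
# score_gate log/docker-bench-security.log.json || exit 1
```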
Closing due to inactivity.
When running multiple groups of checks...

```sh
sh docker-bench-security.sh -c container_images,container_runtime
```

... output the number of checks/score per group as well as the combined total (which is the current output). This would allow us to capture each individual group's score (provided the logging issue gets resolved!) and include logic around score metrics to fail the CI/CD pipeline.