There is a corner case where an Ansible playbook causes an error in /hardening/host-os/ansible, and because we let the test reboot + scan anyway (like other Ansible tests), the errored_count and failed_count in lib/results.py get reset to 0 any time the test is started after a host reboot.
This is not an issue for VM-using tests because the test itself doesn't exit and thus retains its variables, whereas the host-os test is re-started by TMT.
A reasonable fix would be to add all_count (in addition to the above) and, if it's 0, try to find an existing results.yaml, read it through PyYAML, and fill in errored_count and failed_count. That should make the test itself report error too.
Note that this has theoretically wider implications even for non-error cases: if a test reports a fail, reboots, and then doesn't report anything, or reports only pass / info / skip, the test itself will pass, because it never saw the fail after the reboot.