Archana-PandeyM opened 3 weeks ago
While the idea is good, I think this ought to be done rather in pytest-client-tools:
- the fixture could potentially run after other fixtures, and thus miss denials from commands run by them
- pytest-client-tools, as it is a pytest plugin, knows already about the execution time of the test, and thus it is possible to properly restrict the lookup of the SELinux denials for each test specifically (see the checkpoints in ausearch)
- pytest-client-tools also already has logs for each test, and the log with SELinux denials would be an additional one
Thanks for the feedback. I agree that we should put it in pytest_client_tools; eventually we would add such checks in the rhc repo as well. So would you suggest keeping this fixture in pytest_client_tools/plugin.py?
This will make all tests fail even if some other RHEL package has SELinux misconfigured and gets run. Which I guess is good, but it could poison our results for some time until it gets fixed.
Would it make sense to only check lines that might be related to our tools (e.g. insights-client, gpg, python, ...)?
Yes, I planned to do so.
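A minimal sketch of what such filtering could look like (the tool names and the helper are illustrative assumptions, not actual pytest-client-tools code):

```python
# Illustrative list of "our" tools, taken from the examples in the
# comment above; a real implementation would likely make this configurable.
RELEVANT = ("insights-client", "gpg", "python")

def relevant_denials(ausearch_output):
    """Keep only the AVC lines that mention one of the known tools."""
    return [
        line for line in ausearch_output.splitlines()
        if "avc: " in line.lower() and any(tool in line for tool in RELEVANT)
    ]
```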
> I agree that we should put it in pytest_client_tools; eventually we would add such checks in the rhc repo as well. So would you suggest keeping this fixture in pytest_client_tools/plugin.py?
Not as a fixture, no. There are already hooks that track certain parts of the test execution flow, and the SELinux checks would need to be added there:
- before starting the test, a checkpoint needs to be created with ausearch
- when a test finishes, get the list of denials between the checkpoint and the test completion, logging the result
In pytest-client-tools there are already the bits to handle per-test stuff, so this should not be complicated to add.
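The checkpoint mechanism described above could be sketched roughly like this (a guess at the shape, not the actual implementation; `ausearch --checkpoint` records the position of the last seen audit event in a file, and a later run with the same file only reports newer events):

```python
import subprocess

def ausearch_cmd(checkpoint_file, interpret=False):
    # Build the ausearch invocation; --checkpoint makes ausearch remember
    # the last event it saw in checkpoint_file, so a second run with the
    # same file only returns events recorded after the first run.
    cmd = ["ausearch", "-m", "AVC", "--checkpoint", checkpoint_file]
    if interpret:
        cmd.append("-i")  # interpret numeric fields into readable text
    return cmd

def create_checkpoint(checkpoint_file):
    # Run once before the test, discarding output: we only want ausearch
    # to write the position marker into checkpoint_file.
    subprocess.run(ausearch_cmd(checkpoint_file), capture_output=True)

def denials_since(checkpoint_file):
    # Run again after the test: only denials newer than the checkpoint
    # are printed (ausearch exits non-zero when nothing matches).
    res = subprocess.run(ausearch_cmd(checkpoint_file, interpret=True),
                         capture_output=True, text=True)
    return res.stdout
```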
Would you suggest I implement this in pytest_client_tools? Can you help me by pointing to the bits that handle the per-test stuff? I tried but could not figure them out.
> Would you suggest I implement this in pytest_client_tools? Can you help me by pointing to the bits that handle the per-test stuff? I tried but could not figure them out.
Sure.
Start from the current code in `plugin.py`, in particular the hooks (see the upstream documentation [1]):

- `pytest_runtest_protocol` is used when running a test; in pytest_client_tools it creates a new per-test item, the internal class `NodeRunningData` (part of the `util` module): that class holds a few things, including the log of the test
- `pytest_runtest_logfinish` is used when a test ends; in pytest_client_tools this means that the item is dropped, after handling the bits of it -- e.g. closing the log file and archiving it in the artifacts of the test (see the internal `ArtifactsCollector` class)

So in practice I think that:

- the `NodeRunningData` class should create a new checkpoint with `ausearch`, to mark the time when the test starts (so denials before that point are ignored), and keep a reference to it
- in `pytest_runtest_logfinish`, `ausearch` should extract the denials since the checkpoint, save the output in a file (e.g. `selinux.log`, `audit.log`, something like that) and archive that
- all of this should happen only when SELinux is enabled (which can be checked e.g. with the `selinuxenabled` executable)

[1] https://docs.pytest.org/en/stable/reference/reference.html#hooks