stejoo opened this issue 1 year ago
I commented on the PR you submitted. The issue is on your target (or with the ansible/module version you're using, which may not support it correctly).
Sorry for leaving this be. My fork has been working great for us internally. I welcome any suggestions on what to try next/instead.
Auditd does not want to be restarted by `systemctl`. It reports back as such:

```
# systemctl restart auditd.service
Failed to restart auditd.service: Operation refused, unit auditd.service may be requested by dependency only (it is configured to refuse manual start/stop).
See system logs and 'systemctl status auditd.service' for details.
```
Its documentation is also clear on why this is and on how you should restart it instead, using `# /sbin/service auditd restart`.
The current ansible-lockdown code for RHEL9-CIS (as does that for RHEL8 and RHEL7) provides the same solution: https://github.com/ansible-lockdown/RHEL9-CIS/blob/4fb533bcbe8a253d3e8dd13117641f1017e4de56/handlers/main.yml#L104
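Paraphrased, that handler boils down to restarting via the SysV wrapper rather than through systemd (the pinned link above is authoritative; the module choice and absolute path here are my approximation):

```yaml
# Paraphrase of the linked RHEL9-CIS handler: go through the SysV service
# wrapper, since systemctl refuses a manual restart of auditd.
- name: Restart auditd
  ansible.builtin.command: /usr/sbin/service auditd restart
```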
I believe I have described that issue in some detail as well. I have reviewed the Python code of the `service` module and have shown what commands the module generates, and that those will not work for this purpose: it will never restart `auditd` properly (on RHEL8, and probably on other systems as well).
I am open to your ideas.
The commands used for start, stop, and reload can be mapped in the unit's service file using `ExecStart=`, `ExecStop=`, `ExecReload=`, etc.:
https://www.freedesktop.org/software/systemd/man/latest/systemd.service.html
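For illustration only (this drop-in is hypothetical, not something either role ships): auditd re-reads its configuration on SIGHUP, so a reload mapping could look like the sketch below. Note that auditd's vendor unit sets `RefuseManualStop=yes` deliberately, so overriding its start/stop behaviour deserves caution.

```yaml
# Hypothetical drop-in installed via an Ansible task; a "systemctl
# daemon-reload" is needed afterwards. auditd reloads its configuration
# when it receives SIGHUP.
- name: Install auditd unit drop-in (illustrative only)
  ansible.builtin.copy:
    dest: /etc/systemd/system/auditd.service.d/override.conf
    content: |
      [Service]
      ExecReload=/bin/kill -HUP $MAINPID
```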
## Describe the bug

The handler that is supposed to restart `auditd` does not work: the `auditd` service is not restarted.

We added the `auditd_log_group: splunk` setting and ran the playbook to implement the configuration change. The role ran without problems, including the run of the handler at the end (`restart auditd`), which returned a `changed` status, as expected. However, the configuration change did not take effect: the log file was still owned by `root:root`, and looking at the service status showed the `auditd` daemon was not restarted.

## Playbook
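The play was essentially the following (the host group and role reference are placeholders for our internal setup):

```yaml
# Minimal reproduction sketch; "baseline" and the role name are placeholders.
- hosts: baseline
  become: true
  vars:
    auditd_log_group: splunk
  roles:
    - role: RHEL9-CIS
```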
## Output

The relevant output of the playbook run showed the role completing without errors and the `restart auditd` handler reporting `changed`. However, the file on the system was still owned by the (default) `root` user.

Further inquiry showed the `auditd` daemon was not restarted on the system. Performing the restart manually, using the `service` command instead of `systemctl` (as specified by https://access.redhat.com/solutions/2664811), resulted in the `auditd` daemon being properly restarted and the expected change to the log file's group ownership being made.
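The manual sequence was along these lines (commands reconstructed, output omitted):

```
# service auditd restart
# ls -l /var/log/audit/audit.log
```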
Finding (and TL;DR): the role does not restart the `auditd` daemon properly.

## Debugging the issue
The role's `restart auditd` handler (at tag `3.2.4`) looks like this:
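(Reconstructed here for readability from the behaviour described below; the essential detail is the `use:` option.)

```yaml
# The role's handler: restart auditd via the abstraction module,
# forcing the "service" backend.
- name: restart auditd
  ansible.builtin.service:
    name: auditd
    state: restarted
    use: service
```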
The `use` argument gives one the impression Ansible might use the `service` command here. It is apparent that is not the case. Reading the documentation, one sees this option in fact influences the module choice Ansible makes, because `ansible.builtin.service` is an abstraction module (like `package`). Instead of choosing a module based on the Ansible fact `ansible_service_mgr`, the `use:` option forces Ansible to choose a specific module. Setting it to `service` makes Ansible call the, and I quote, "old 'service' module".

Whatever goes on behind the scenes, the end result does not have the desired effect: `auditd` is not restarted. My theory at this point was: Ansible is still calling `systemctl restart auditd`, which is incorrect for this service. The `auditd` daemon demands to be restarted with `service auditd restart` by design (more info at: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/security_guide/sec-starting_the_audit_service). To test my theory that `systemctl` was being called, I decided to debug the module on the node side of the affair. I commanded Ansible to keep the (temporary) generated Python code on the node.
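That is done with the documented `ANSIBLE_KEEP_REMOTE_FILES` environment variable (the playbook name below is a placeholder):

```
$ ANSIBLE_KEEP_REMOTE_FILES=1 ansible-playbook playbook.yml
```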
After "exploding" the resulting AnsiballZ file on the node, I had a look at the module code. To test my theory I added some debug code to the resulting `debug_dir/ansible/modules/service.py` module to have it display the commands it executed; it was indeed calling `systemctl`.
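For anyone retracing this: the AnsiballZ wrapper itself supports `explode` and `execute` arguments, which unpack the embedded module source into `debug_dir` and re-run it after editing:

```
# In the kept temporary directory on the node:
$ python3 AnsiballZ_service.py explode   # unpack module source into debug_dir/
$ python3 AnsiballZ_service.py execute   # re-run the (now edited) module
```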
The `use:` option in the handler task does influence the module chosen by Ansible, as expected. In this case it tells Ansible to use the `ansible.legacy.service` module instead of `ansible.builtin.systemd`. Looking at the code of this module: `ansible.legacy.service` performs its own discovery of the init system on the node. If it finds `systemctl` on the host, it decides to use that. Only as a last resort, if it doesn't find any supported init system, does it fall back to using the `service` command. In my/our case it ends up calling `systemctl`, which results in `auditd` not restarting... :disappointed:

Ansible bug report 22171 describes this issue/behavior in some more depth. A comment there recommends calling `service` directly with the `command` module:
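That recommendation amounts to something like this (whether to rely on `$PATH` or use an absolute path is the open question discussed below):

```yaml
# Workaround per ansible/ansible#22171: bypass the service/systemd
# abstraction and call the SysV wrapper directly, which knows how to
# restart auditd.
- name: restart auditd
  ansible.builtin.command: service auditd restart
```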
Perhaps the role should do something like that. The exact path might be an issue; maybe call it with a relative path and rely on it being in `$PATH`? Or figure out whether the distributions supported by this role share a common absolute path to `service`.

I am somewhat surprised to have hit this bug. Does `auditd` restart properly for others using this role? Or did the `use: service` have the desired effect on other systems or other (older?) Ansible versions? Or was this somehow not noticed? :innocent:

## Environment