Hi @polachz,
I know that the service for the manager is not working in 3.10 (it will be fixed in 3.11) and that you need to start Wazuh with the /var/ossec/bin/ossec-control start command, but I couldn't replicate your error in a virtual machine with CentOS 8. I can get the status of the Wazuh daemons through the API request.
It seems that your issue is due to a permissions error. By default, the Wazuh API is executed by the ossec user. When you run the Wazuh API in the foreground, you are executing it as the user who launched it (I suppose that this user is root). You can get the same behavior through the API service if you edit /var/ossec/api/configuration/config.js and set the config.drop_privileges option to false.
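The relevant part of config.js should look something like this (only the drop_privileges line matters here; the rest of the file stays as it is):

```js
// /var/ossec/api/configuration/config.js
// Keep the API running as the launching user (e.g. root)
// instead of dropping privileges to the ossec user.
config.drop_privileges = false;
```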
For the curl -u foo:bar "http://localhost:55000/manager/status" request, the Wazuh framework checks whether the Wazuh processes are running. It is possible that your /proc directory lacks read and execute permissions for others (please run ls -lhrt / | grep "proc" to check).
Best regards,
Demetrio.
Hi,
Thank you for the reply. When I disable drop privileges, everything runs smoothly and the API returns the correct states, so the problem is definitely related to the ossec user's rights.
My /proc permissions are as follows:

```
ls -lhrt / | grep "proc"
dr-xr-xr-x. 129 root root 0 Oct 27 13:16 proc
```

So the read and execute rights seem to be set correctly.
I should note that SELinux is enabled on the server...
Additionally, another user from the mailing list (Daniel Melgarejo) was able to reproduce the problem too...
Do you need any other checks? I'm not happy to leave anything running as root on my servers if it's not absolutely necessary...
Hi @polachz,
I checked that Daniel Melgarejo is getting a correct response (he can obtain the status of the Wazuh daemons).
If you want to debug this error, please execute this command and paste the output:

```
# sudo -u ossec /var/ossec/framework/python/bin/python3 -c 'from wazuh.manager import status; print(status())'
```
Furthermore, I want to know the output of these commands: ls -lhtr /proc/ | grep 'ossec' and ls -lhrt /var/ossec/var/run.
Best regards,
Demetrio.
```
sudo -u ossec /var/ossec/framework/python/bin/python3 -c 'from wazuh.manager import status; print(status())'
[sudo] password for user:
{'ossec-agentlessd': 'stopped', 'ossec-analysisd': 'failed', 'ossec-authd': 'failed', 'ossec-csyslogd': 'stopped', 'ossec-dbd': 'stopped', 'ossec-monitord': 'failed', 'ossec-execd': 'failed', 'ossec-integratord': 'stopped', 'ossec-logcollector': 'failed', 'ossec-maild': 'stopped', 'ossec-remoted': 'failed', 'ossec-reportd': 'stopped', 'ossec-syscheckd': 'failed', 'wazuh-clusterd': 'stopped', 'wazuh-modulesd': 'failed', 'wazuh-db': 'failed'}
```
Running as a standard user, the command returns an empty response:

```
ls -lhtr /proc/ | grep 'ossec'
```

Running as ossec:

```
sudo -u ossec ls -lhtr /proc/ | grep 'ossec'
dr-xr-xr-x. 9 ossec ossec 0 Oct 29 15:17 18976
```

Running as root:

```
ls -lhtr /proc/ | grep 'ossec'
dr-xr-xr-x. 9 ossec  ossec 0 Oct 27 13:21 2736
dr-xr-xr-x. 9 root   ossec 0 Oct 27 13:21 2729
dr-xr-xr-x. 9 ossec  ossec 0 Oct 27 13:21 2764
dr-xr-xr-x. 9 root   ossec 0 Oct 27 13:21 2756
dr-xr-xr-x. 9 root   ossec 0 Oct 27 13:21 2777
dr-xr-xr-x. 9 ossecr ossec 0 Oct 27 13:21 2776
dr-xr-xr-x. 9 root   ossec 0 Oct 27 13:21 2796
dr-xr-xr-x. 9 ossec  ossec 0 Oct 27 13:21 2790
dr-xr-xr-x. 9 root   ossec 0 Oct 27 13:21 2784
```
Hope this helps
Ok @polachz,
Please paste the result of the ls -lhrt /var/ossec/var/run command (as root).
It seems that it's readable by root and the ossec group :(
```
ls -lhrt /var/ossec/var/run
total 44K
-rw-r-----. 1 ossec  ossec    5 Oct 27 13:21 wazuh-db-2736.pid
-rw-r-----. 1 root   ossec    5 Oct 27 13:21 ossec-authd-2729.pid
-rw-r-----. 1 root   ossec    5 Oct 27 13:21 ossec-execd-2756.pid
-rw-r-----. 1 ossecr ossec    5 Oct 27 13:21 ossec-remoted-2776.pid
-rw-r-----. 1 root   ossec    5 Oct 27 13:21 ossec-syscheckd-2777.pid
-rw-r-----. 1 root   ossec    5 Oct 27 13:21 ossec-logcollector-2784.pid
-rw-r-----. 1 ossec  ossec    5 Oct 27 13:21 ossec-monitord-2790.pid
-rw-r-----. 1 root   ossec    5 Oct 27 13:21 wazuh-modulesd-2796.pid
-rw-r-----. 1 ossec  ossec    5 Oct 27 13:21 ossec-analysisd-2764.pid
-rw-r--r--. 1 ossecr ossec  461 Oct 29 16:06 ossec-remoted.state
-rw-r-----. 1 ossec  ossec 2.2K Oct 29 16:06 ossec-analysisd.state
```
/etc/group contains this line:

```
ossec:x:991:ossec,ossecr,ossecm
```

And /etc/passwd contains:

```
ossec:x:994:991::/var/ossec:/sbin/nologin
ossecr:x:993:991::/var/ossec:/sbin/nologin
ossecm:x:992:991::/var/ossec:/sbin/nologin
```
Regarding your results, the ossec user should get the same result as the root user when executing the ls -lhtr /proc/ | grep 'ossec' command (your permissions for the /proc directory are dr-xr-xr-x. 129 root root 0 Oct 27 13:16 proc, and this is OK).
The function that the API executes for the manager/status call is this one:
https://github.com/wazuh/wazuh/blob/3.10/framework/wazuh/cluster/utils.py#L91-L119
This function needs to list the contents of the /proc directory. Since sudo -u ossec ls -lhtr /proc/ | grep 'ossec' didn't give you the expected result, the function cannot see the processes either, which is why some Wazuh daemons are reported with failed status.
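In essence, the check works like this (a simplified sketch of that logic, not the exact Wazuh code): it reads each daemon's PID file from /var/ossec/var/run and then looks for the matching /proc/&lt;pid&gt; entry, so an unreadable /proc makes a running daemon look failed.

```python
# Simplified sketch of the daemon status check (not the exact Wazuh implementation).
# A daemon with a PID file but no visible /proc/<pid> entry is reported as 'failed'.
import glob
import os
import re

RUN_DIR = '/var/ossec/var/run'

def daemon_status(daemon):
    pidfiles = glob.glob(os.path.join(RUN_DIR, f'{daemon}-*.pid'))
    if not pidfiles:
        return 'stopped'  # no PID file at all: the daemon was never started
    for pidfile in pidfiles:
        match = re.search(r'-(\d+)\.pid$', pidfile)
        if match and os.path.exists(f'/proc/{match.group(1)}'):
            return 'running'  # PID file and /proc entry agree
    return 'failed'  # PID file exists, but the process is not visible in /proc

print(daemon_status('ossec-analysisd'))
```

Running this as ossec versus root should reproduce exactly the failed/running difference you are seeing.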
In my environment, the /proc directory is mounted as:

```
# mount | grep "/proc"
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
```
SELinux is enabled:

```
# getenforce
Enforcing

# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      31
```
But I can get the status of the Wazuh daemons. As a temporary solution, I suggest using the API with config.drop_privileges set to false until you can list the contents of the /proc directory with the ossec user.
Best regards,
Demetrio.
Ok, after your explanation and the pointer to the code, I found the reason: I had used the hidepid option for the /proc mount. Once it was removed, everything works smoothly.
I'm going to use the gid option here with the ossec group, to preserve the hardening while still allowing ossec to work.
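For anyone hitting the same problem, an fstab entry along these lines should keep hidepid while exempting the ossec group (an untested sketch; 991 is the ossec group's GID from my /etc/group above, so adjust it for your system):

```
# /etc/fstab: hide other users' processes, but exempt the ossec group (GID 991)
proc  /proc  proc  defaults,hidepid=2,gid=991  0  0
```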
Thank you for the help. Maybe a check for this option during the Wazuh install could help in the future... The hidepid option is being recommended more and more often in various hardening guides...
OK @polachz, thank you for your time. Community feedback is very useful for us and helps to improve Wazuh. We are going to consider adding support for the hidepid option for the /proc directory.
Hi. I'm trying to get Wazuh working on a CentOS 8 server for some tests, but when the API is started as a service, it responds that the Wazuh daemons are not running:
```
curl -u foo:bar "http://localhost:55000/manager/status"
{"error":0,"data":{"ossec-agentlessd":"stopped","ossec-analysisd":"failed","ossec-authd":"failed","ossec-csyslogd":"stopped","ossec-dbd":"stopped","ossec-monitord":"failed","ossec-execd":"failed","ossec-integratord":"stopped","ossec-logcollector":"failed","ossec-maild":"stopped","ossec-remoted":"failed","ossec-reportd":"stopped","ossec-syscheckd":"failed","wazuh-clusterd":"stopped","wazuh-modulesd":"failed","wazuh-db":"failed"}}
```
```
~$ systemctl status wazuh-api
● wazuh-api.service - Wazuh API daemon
   Loaded: loaded (/etc/systemd/system/wazuh-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-10-27 13:40:53 CET; 37s ago
     Docs: https://documentation.wazuh.com/current/user-manual/api/index.html
  Process: 4299 ExecStop=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
 Main PID: 4398
    Tasks: 11 (limit: 2104)
   Memory: 21.3M
   CGroup: /system.slice/wazuh-api.service
           └─4398 /bin/node /var/ossec/api/app.js

Oct 27 13:40:53 xxxxxx systemd[1]: Started Wazuh API daemon.
```
But when I run the API in the foreground directly:

```
node /var/ossec/api/app.js -f
```

then the API correctly reports that the daemons are running:
```
curl -u foo:bar "http://localhost:55000/manager/status"
{"error":0,"data":{"ossec-agentlessd":"stopped","ossec-analysisd":"running","ossec-authd":"running","ossec-csyslogd":"stopped","ossec-dbd":"stopped","ossec-monitord":"running","ossec-execd":"running","ossec-integratord":"stopped","ossec-logcollector":"running","ossec-maild":"stopped","ossec-remoted":"running","ossec-reportd":"stopped","ossec-syscheckd":"running","wazuh-clusterd":"stopped","wazuh-modulesd":"running","wazuh-db":"running"}}
```
Because of this, the Wazuh Kibana app doesn't work without the API running in the foreground.
Of course, the manager is running; I didn't make any change to the wazuh-manager service when switching the API from a service to a foreground process.
api.log doesn't contain any errors (severity is set to debug in the API config file).
If you need any more information, please let me know.