Open nameduser0 opened 2 years ago
Cannot reproduce, works fine here. Could you post the output you get when running the management/status_checks.py
script via SSH?
I'm not sure how it does the external checks, but most of these aren't true. I'll do some more digging when I get a chance.
Partial output (there's a lot! - sanitised):
System
✖ Public DNS (nsd4) is not running (port 53).
✖ Incoming Mail (SMTP/postfix) is running but is not publicly accessible at IP_HERE:25.
✖ Outgoing Mail (SMTP 465/postfix) is running but is not publicly accessible at IP_HERE:465.
✖ Outgoing Mail (SMTP 587/postfix) is running but is not publicly accessible at IP_HERE:587.
✖ IMAPS (dovecot) is running but is not publicly accessible at IP_HERE:993.
✖ Mail Filters (Sieve/dovecot) is running but is not publicly accessible at IP_HERE:4190.
✖ HTTP Web (nginx) is running but is not publicly accessible at IP_HERE:80.
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
✖ HTTPS Web (nginx) is running but is not publicly accessible at IP_HERE:443.
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
✖ The SSH server on this machine permits password-based login. A more secure way to log in is using a public key. Add your SSH public key to $HOME/.ssh/authorized_keys, check that you can log in without a password, set the option 'PasswordAuthentication no' in /etc/ssh/sshd_config, and then restart the openssh via 'sudo service ssh restart'.
✖ There are 1 software packages that can be updated.
tzdata (2021a-1+deb11u4)
- You are running version Mail-in-a-Box v56.4. Mail-in-a-Box version check disabled by privacy setting.
✓ System administrator address exists as a mail alias. [administrator@HOST.DOMAIN.co.uk ↦ richard@DOMAIN.co.uk]
✓ The disk has 2.12 GB space remaining.
✓ System memory is 32% free.
PGP Keyring
✓ The daemon's key (B1E2876F711DBEF81EBBC13BC5F) is good. It expires in 175 days on 2022-12-07.
- There are no imported keys here.
Network
? The ufw program was not installed. If your system is able to run iptables, rerun the setup.
✓ Outbound mail (SMTP port 25) is not blocked.
✓ IP address is not blacklisted by zen.spamhaus.org.
- No SMTP relay has been set up.
Syslog output:
Jun 15 04:02:14 miab start[66244]: 127.0.0.1 - - [15/Jun/2022 04:02:14] "GET /system/privacy?=1655262109789 HTTP/1.0" 200 -
Jun 15 04:02:14 miab start[66244]: 127.0.0.1 - - [15/Jun/2022 04:02:14] "GET /system/reboot?=1655262109790 HTTP/1.0" 200 -
Jun 15 04:02:14 miab postfix/smtpd[461885]: connect from localhost[127.0.0.1]
Jun 15 04:02:14 miab postfix/submission/smtpd[461886]: connect from localhost[127.0.0.1]
Jun 15 04:02:14 miab postfix/submission/smtpd[461887]: connect from localhost[127.0.0.1]
Jun 15 04:02:14 miab postfix/submission/smtpd[461887]: lost connection after CONNECT from localhost[127.0.0.1]
Jun 15 04:02:14 miab postfix/submission/smtpd[461887]: disconnect from localhost[127.0.0.1] commands=0/0
Jun 15 04:02:14 miab postfix/submission/smtpd[461886]: SSL_accept error from localhost[127.0.0.1]: lost connection
Jun 15 04:02:14 miab postfix/submission/smtpd[461886]: lost connection after CONNECT from localhost[127.0.0.1]
Jun 15 04:02:14 miab postfix/submission/smtpd[461886]: disconnect from localhost[127.0.0.1] commands=0/0
Jun 15 04:02:14 miab opendmarc[65447]: ignoring connection from localhost
Jun 15 04:02:14 miab postfix/smtpd[461885]: lost connection after CONNECT from localhost[127.0.0.1]
Jun 15 04:02:14 miab postfix/smtpd[461885]: disconnect from localhost[127.0.0.1] commands=0/0
Jun 15 04:02:14 miab named[64745]: received control channel command 'flush'
Jun 15 04:02:14 miab named[64745]: flushing caches in all views succeeded
Jun 15 04:02:15 miab named[64745]: resolver priming query complete
Jun 15 04:02:22 miab start[66244]: 127.0.0.1 - - [15/Jun/2022 04:02:22] "POST /system/status HTTP/1.0" 200 -
I think I might have found the problem. I'm getting regular emails with this subject:
Status Checks Change Notice
Missing privilege separation directory: /run/sshd
Traceback (most recent call last):
  File "/root/power-mailinabox/management/status_checks.py", line 1643, in <module>
    run_and_output_changes(env, pool)
  File "/root/power-mailinabox/management/status_checks.py", line 1464, in run_and_output_changes
    run_checks(True, env, cur, pool)
  File "/root/power-mailinabox/management/status_checks.py", line 130, in run_checks
    if not run_services_checks(env, output, pool):
  File "/root/power-mailinabox/management/status_checks.py", line 179, in run_services_checks
    for i, service in enumerate(get_services())),
  File "/root/power-mailinabox/management/status_checks.py", line 78, in get_services
    "port": get_ssh_port(),
  File "/root/power-mailinabox/management/status_checks.py", line 157, in get_ssh_port
    output = shell('check_output', ['sshd', '-T'])
  File "/root/power-mailinabox/management/utils.py", line 147, in shell
    ret = getattr(subprocess, method)(cmd_args, **kwargs)
  File "/usr/lib/python3.9/subprocess.py", line 424, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/usr/lib/python3.9/subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['sshd', '-T']' returned non-zero exit status 255.
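For what it's worth, the crash could be contained in get_ssh_port() itself: catch the CalledProcessError from `sshd -T` and fall back to the default port instead of letting the whole status-check run die. A minimal sketch (the fallback behaviour is my suggestion, not what status_checks.py currently does; the parsing of the `port` line follows what `sshd -T` prints):

```python
import subprocess

DEFAULT_SSH_PORT = 22

def get_ssh_port():
    """Defensive variant of get_ssh_port(): if `sshd -T` fails (e.g.
    "Missing privilege separation directory: /run/sshd") or sshd is not
    installed, fall back to the default port instead of raising."""
    try:
        output = subprocess.check_output(
            ["sshd", "-T"], stderr=subprocess.DEVNULL, text=True
        )
    except (subprocess.CalledProcessError, FileNotFoundError):
        return DEFAULT_SSH_PORT
    # `sshd -T` dumps the effective config, one "key value" pair per line.
    for line in output.splitlines():
        if line.startswith("port "):
            return int(line.split()[1])
    return DEFAULT_SSH_PORT
```

With this, a broken sshd would at worst report the wrong SSH port in the status checks rather than a 500 on the whole page.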
This error looks a lot like it's specific to the machine it runs on. For example, it runs fine on my production machine.
Specifically, I've encountered this issue when testing #53 (aka your PR to add support for LXC containers)
Missing privilege separation directory: /run/sshd
Sounds like a permissions issue rather than a hardware issue to me. Have you tried your latest release on a fresh Debian 11 VM? I've only tested on a freshly installed container.
I have seen this before, and I seem to remember it had something to do with sshd running in socket mode rather than service mode.
Okay, so it looks like I found the issue. Here's what I think happened:
Running in a container:
1. sshd -T failed, causing the system status command to fail through the admin page (HTTP 500)
2. sshd -T started working again, and the system status page also started working
The failure shows up in the log as:
sshd[1554224]: fatal: Missing privilege separation directory: /run/sshd
root@miab:/var/log# sshd -T
Missing privilege separation directory: /run/sshd
root@miab:/var/log# service sshd status
● ssh.service - OpenBSD Secure Shell server
     Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: man:sshd(8)
             man:sshd_config(5)
root@miab:/var/log# ps aux | grep ssh
root 1550309 0.0 1.1 14456 8736 ? Ss 20:22 0:00 sshd: root@pts/2
root 1553818 0.0 0.0 6184 648 pts/2 S+ 21:10 0:00 grep ssh
root@miab:/var/log# cat /proc/1550309/cmdline | xargs -0 echo
sshd: root@pts/2
root@miab:/var/log# service sshd restart
root@miab:/var/log# service sshd status
● ssh.service - OpenBSD Secure Shell server
     Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2022-06-25 21:11:40 BST; 3s ago
       Docs: man:sshd(8)
             man:sshd_config(5)
    Process: 1553831 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
   Main PID: 1553832 (sshd)
      Tasks: 1 (limit: 9426)
     Memory: 1.0M
        CPU: 30ms
     CGroup: /system.slice/ssh.service
             └─1553832 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups

Jun 25 21:11:40 miab.casesolved.co.uk systemd[1]: Starting OpenBSD Secure Shell server...
Jun 25 21:11:40 miab.casesolved.co.uk sshd[1553832]: Server listening on 0.0.0.0 port 22.
Jun 25 21:11:40 miab.casesolved.co.uk sshd[1553832]: Server listening on :: port 22.
Jun 25 21:11:40 miab.casesolved.co.uk systemd[1]: Started OpenBSD Secure Shell server.
root@miab:/var/log# sshd -T
port 22
addressfamily any
listenaddress [::]:22
truncated ...

root@miab:/var/log# ps aux | grep ssh
root 1550309 0.0 1.0 14456 8384 ? Ss 20:22 0:00 sshd: root@pts/2
root 1553832 0.0 0.8 13292 6888 ? Ss 21:11 0:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root 1553893 0.0 0.0 6184 716 pts/2 S+ 21:14 0:00 grep ssh
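The restart works, as far as I can tell, because Debian's ssh.service unit declares RuntimeDirectory=sshd, so systemd recreates /run/sshd when the service starts. If the status checks wanted to cope with this without restarting the service, a workaround could be to recreate the directory before calling sshd -T. A hypothetical helper (the path parameter exists only so it can be exercised outside /run; the default mode matches the 0755 that Debian's unit uses):

```python
import os

PRIVSEP_DIR = "/run/sshd"  # the path from the fatal error above

def ensure_privsep_dir(path=PRIVSEP_DIR, mode=0o755):
    """Recreate sshd's privilege separation directory if it is missing,
    mirroring what starting the ssh service does via RuntimeDirectory=.
    Returns True if the directory exists afterwards. Needs root when
    used against the real /run/sshd."""
    os.makedirs(path, mode=mode, exist_ok=True)
    return os.path.isdir(path)
```

In a container where sshd is only ever socket-activated or started manually, this would make `sshd -T` usable again without touching the service state.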
I get the below error when I go to the Status Checks page.
It's due to a 500 HTTP code from this URL:
https://<hostname>/admin/system/status