jokob-sk / NetAlertX

💻🔍 WIFI / LAN intruder detector. Scans for devices connected to your network and alerts you if new and unknown devices are found.
GNU General Public License v3.0

[permission/firewall issue] No web interface #704

Open jokob-sk opened 3 weeks ago

jokob-sk commented 3 weeks ago
Thank you for a wonderful application.

I face this issue on a fresh install (no files copied over). The Web interface does not load, I just get a blank page. I have attached the docker cli command I use and the logs, if anyone can point me in the right direction that would be great.

docker@netalert [ ~ ]# docker run -d --name=netalert --network=host -e PORT=20211 -v /docker/pialert/config:/app/config -v /docker/pialert/db:/app/db -e TZ=Europe/Berlin -e ALWAYS_FRESH_INSTALL --restart=unless-stopped jokobsk/netalertx:24.6.8

docker logs netalert.txt

Originally posted by @joel72265 in https://github.com/jokob-sk/NetAlertX/issues/674#issuecomment-2157539168

jokob-sk commented 3 weeks ago

@joel72265 let's continue here:

history:

Thank you for a wonderful application.

I face this issue on a fresh install (no files copied over). The Web interface does not load, I just get a blank page. I have attached the docker cli command I use and the logs, if anyone can point me in the right direction that would be great.

docker@netalert [ ~ ]# docker run -d --name=netalert --network=host -e PORT=20211 -v /docker/pialert/config:/app/config -v /docker/pialert/db:/app/db -e TZ=Europe/Berlin -e ALWAYS_FRESH_INSTALL --restart=unless-stopped jokobsk/netalertx:24.6.8

docker logs netalert.txt

Please try this command (make sure local/path is a valid location on your server; ALWAYS_FRESH_INSTALL always deletes your data, so don't use it).

docker run -d --rm --network=host \
  -v local/path/config:/app/config \
  -v local/path/db:/app/db \
  -e TZ=Europe/Berlin \
  -e PORT=20211 \
  jokobsk/netalertx:latest

If the above doesn't work, please post any browser console errors and check whether nginx is running in the container.
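For reference, a quick way to check the nginx part from the host could look like this (a sketch; it assumes the container is named netalert as in the command above and that ps/tail are available in the image):

docker exec netalert ps aux | grep '[n]ginx'                # is the nginx process up inside the container?
docker exec netalert tail -n 50 /var/log/nginx/error.log    # recent nginx errors, if any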

Thank you for the reply.

I used the -e variable 'ALWAYS_FRESH_INSTALL' before to clean up my previous install, but now I did a clean install and ran the command without this variable.

1) the install path is valid
2) nginx is running (screenshot attached)
3) the browser console does not show any errors (Chrome > Developer tools)
4) logs attached

netalertx-01 docker logs netalertx-11June.txt

Thanks for the screenshot @joel72265. Are you running another instance of netalertx (or another application) on your network on port 20211? If so, that can prevent NGINX from starting. You can try a different port to see if that helps.

You can also try these steps: https://github.com/jokob-sk/NetAlertX/blob/main/docs/WEB_UI_PORT_DEBUG.md
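For reference, a minimal port-conflict check on the Docker host might look like this (assuming the default web UI port 20211; use whichever of netstat/ss your host provides):

sudo netstat -tulpn | grep :20211    # anything already listening on the web UI port?
sudo ss -tulpn | grep :20211         # same check with ss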

netalertx is the only container running on the box with its unique IP :(

I should've thought of trying another port myself, but still nothing (screenshot attached). I will keep testing at my end. netalertx-02

Something I should have mentioned: the web page did not load with v24.5.9 either, but the page did load with v24.3.19 (there the old db and config paths were used).

jokob-sk commented 3 weeks ago

@joel72265

Could you please post the output of these commands/try the following:

  1. On the server running docker: sudo docker container ls --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}" -a
  2. In the container: sudo lsof -i
  3. Try to start the application without mounting any volumes; if it starts, this indicates a file permissions issue (a sketch is below the list)
  4. Remove the container and the image, then re-download the latest image from scratch

If the above doesn't help, please post the command you used to run the docker image.
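For step 3, a minimal sketch of a volume-less test run (the container name netalertx_test and the TZ value are just examples; host networking and port 20211 match the commands used earlier in this thread):

docker run -d --rm --name=netalertx_test --network=host \
  -e TZ=Europe/Berlin \
  -e PORT=20211 \
  jokobsk/netalertx:latest

If the UI loads with this, the mounted /config and /db folders are the likely culprit.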

dvystrcil commented 3 weeks ago

I am running into this same issue. The browser presents a white page.

In my case I am running the container in my kubernetes homelab, so I am kinda on my own here.

jokob-sk commented 3 weeks ago

Hi there,

Can you please provide debug information based on this guide so that I can try to help? https://github.com/jokob-sk/NetAlertX/blob/main/docs/WEB_UI_PORT_DEBUG.md

dvystrcil commented 2 weeks ago

I am still poking at it, so far nothing interesting has come up from walking through all the steps. I did find that it is serving up a page, but it literally is blank.

netalertx-7f4954bb84-p8wrs:/# curl localhost:20211
<!-- NetAlertX CSS -->
<link rel="stylesheet" href="css/app.css">

Here is a small snip from my logs. I have restarted a few times but in between I am seeing that it finds new devices and such. I should have saved out the more interesting logs.

23:50:48 [Plugin utils] display_name: Internet-Check
23:50:48 [Plugins] Executing: python3 /app/front/plugins/internet_ip/script.py prev_ip={prev_ip} INTRNT_DIG_GET_IP_ARG={INTRNT_DIG_GET_IP_ARG}
23:50:49 [Plugins] SUCCESS, received 1 entries
23:50:49 [API] Updating table_appevents.json file in /front/api
23:50:49 [API] Updating table_plugins_history.json file in /front/api
23:50:49 [Process Scan]  Processing scan results
23:50:49 [Process Scan] Print Stats
23:50:49 [Scan Stats] Devices Detected.......: 3
23:50:49 [Scan Stats] New Devices............: 0
23:50:49 [Scan Stats] Down Alerts............: 0
23:50:49 [Scan Stats] New Down Alerts........: 0
23:50:49 [Scan Stats] New Connections........: 8
23:50:49 [Scan Stats] Disconnections.........: 0
23:50:49 [Scan Stats] IP Changes.............: 0
23:50:49 [Scan Stats] Scan Method Statistics:
23:50:49     INTRNT: 1
23:50:49     arp-scan: 1
23:50:49     local_MAC: 1
23:50:49 [Process Scan] Stats end
23:50:49 [Process Scan] Sessions Events (connect / discconnect)
23:50:49 [Process Scan] Creating new devices
23:50:49 [Process Scan] Updating Devices Info
23:50:50 [Process Scan] Voiding false (ghost) disconnections
23:50:50 [Process Scan] Pairing session events (connection / disconnection)
23:50:50 [Process Scan] Creating sessions snapshot
23:50:50 [Process Scan] Inserting scan results into Online_History
23:50:50 [Process Scan] Skipping repeated notifications
23:50:50 [Skip Repeated Notifications] Skip Repeated
23:50:50 [Plugin utils] ---------------------------------------------
23:50:50 [Plugin utils] display_name: NSLOOKUP (Name discovery)
23:50:50 [Plugins] Executing: python3 /app/front/plugins/nslookup_scan/nslookup.py
23:50:50 [Plugins] No output received from the plugin NSLOOKUP - enable LOG_LEVEL=debug and check logs
23:50:50 [Notification] Check if something to report
23:50:50 [Notification] Included sections: ['new_devices', 'down_devices', 'events']
23:50:50 [Notification] No changes to report

Small update: if I navigate to http://localhost:20211/img/NetAlertX_logo.png I see the image. I am wondering if PHP is running properly.
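One way to sanity-check PHP from inside the container is to look for the php-fpm processes and at the PHP error log (a sketch; the log path is the one jokob-sk lists below, and ps/tail are assumed to be available in the image):

ps aux | grep '[p]hp-fpm'                        # php-fpm master and pool workers should be listed
tail -n 50 /app/front/log/app.php_errors.log     # recent PHP errors, if any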

jokob-sk commented 2 weeks ago

Thanks for that @dvystrcil , the backend seems to be running fine.

Make sure NGINX is running

Have you checked if NGINX is running in the container?

Check if there are conflicting ports

Execute the following in the container to see the processes and their ports and submit a screenshot of the result:

sudo apt-get install lsof
sudo lsof -i

Try running the nginx command in the container

If you get `nginx: [emerg] bind() to 0.0.0.0:20211 failed (98: Address in use)`, try using a different port number.
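A sketch of switching to a different port by recreating the container (port 20212 and the paths are just examples; adjust to your own setup):

docker rm -f netalertx
docker run -d --network=host \
  -e TZ=Europe/Berlin \
  -e PORT=20212 \
  -v local/path/config:/app/config \
  -v local/path/db:/app/db \
  --name=netalertx \
  jokobsk/netalertx:latest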

Check for browser console errors + check different browsers

Also, check if there are any F12 browser dev console errors.

Disable proxy

If you have any reverse proxy or similar, try disabling it.

Post your docker start details

Also, can you post your docker compose/run command?

Check your PHP/NGINX error logs

NGINX: /var/log/nginx/error.log
PHP: /app/front/log/app.php_errors.log
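Both can be tailed from the host without opening a shell in the container, for example (assuming the container is named netalertx):

docker exec netalertx tail -n 100 /var/log/nginx/error.log
docker exec netalertx tail -n 100 /app/front/log/app.php_errors.log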

dvystrcil commented 2 weeks ago

Just for your notes, apt would be for Debian-based images. I see you have Dockerfile.Debian for that, but the container I have running is Alpine, so the commands to add new packages are different.
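On an Alpine-based image, the package step above would roughly translate to apk, e.g.:

apk add --no-cache lsof
lsof -i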

Anyway, the port is not blocked; as in my example above, localhost:20211 does produce an empty page with just:

<!-- NetAlertX CSS -->
<link rel="stylesheet" href="css/app.css">

This is consistent with your index.php in the /app/front folder:

netalertx-7f4954bb84-p8wrs:/app/front# cat index.php 
<!-- NetAlertX CSS -->
<link rel="stylesheet" href="css/app.css">

<?php
require dirname(__FILE__).'/php/server/init.php';
require 'php/templates/security.php';

$CookieSaveLoginName = 'NetAlertX_SaveLogin';
...

The error.log does have some action happening. Here is a snip, I can provide the rest if you find it of use:

netalertx-7f4954bb84-p8wrs:/app/front# cat /var/log/nginx/error.log
2024/06/14 23:40:42 [error] 151#151: *1 FastCGI sent in stderr: "PHP message: PHP Warning:  file(/app/front/php/templates/../../../config/app.conf): Failed to open stream: Permission denied in /app/front/php/templates/timezone.php on line 15; PHP message: PHP Fatal error:  Uncaught TypeError: preg_grep(): Argument #2 ($array) must be of type array, false given in /app/front/php/templates/timezone.php:16
Stack trace:
#0 /app/front/php/templates/timezone.php(16): preg_grep()
#1 /app/front/php/server/init.php(3): require('...')
#2 /app/front/index.php(5): require('...')

/app/config is currently owned by nobody:nobody

netalertx-7f4954bb84-p8wrs:/app/config# ls -la
total 4089
drwxr-x---    2 nobody   nobody          20 Jun 14 23:39 .
drwxr-x---    1 nginx    www-data      4096 Jun  8 07:05 ..
-rw-r-----    1 nobody   nobody       63036 Jun 15 09:45 IP_changes.log
-rw-rw-rw-    1 nobody   nobody        2971 Jun 14 23:39 app.conf
-rw-rw-rw-    1 nobody   nobody      978944 Jun 15 09:40 app.db
-rw-r-----    1 nobody   nobody       32768 Jun 15 09:45 app.db-shm
...

As you can tell, those files are rw by everyone. I did alter app.conf to see if I could wake it up.

I need to step out, but I will get back to this later today (pacific time).

One last thing. Here is what my app.conf looks like:

#                                                    #
#         Generated:  2022-12-30_22-19-40            #
#                                                    #
#   Config file for the LAN intruder detection app:  #
#      https://github.com/jokob-sk/NetAlertX          #
#                                                    #
#-----------------AUTOGENERATED FILE-----------------#

# 🔺 Use the Settings UI - only edit when necessary 🔺

# General
#---------------------------
# Scan using interface eth0
# SCAN_SUBNETS    = ['192.168.1.0/24 --interface=eth0']
#
# Scan multiple interfaces (eth1 and eth0):
# SCAN_SUBNETS    = [ '192.168.1.0/24 --interface=eth1', '192.168.1.0/24 --interface=eth0' ]

SCAN_SUBNETS=['192.168.86.0/24 --interface=eth0']

TIMEZONE='America/Los_Angeles'
DAYS_TO_KEEP_EVENTS=90
# Used for generating links in emails. Make sure not to add a trailing slash!
REPORT_DASHBOARD_URL='https://netalertx.<redacted>.net'

# Email
#---------------------------
SMTP_RUN='disabled'  # use 'on_notification' to enable
SMTP_SERVER='smtp.gmail.com'
SMTP_PORT=587
SMTP_REPORT_TO='user@gmail.com'
SMTP_REPORT_FROM='NetAlertX <user@gmail.com>'
SMTP_SKIP_LOGIN=False
SMTP_USER='user@gmail.com'
SMTP_PASS='password'
SMTP_SKIP_TLS=False

# Webhooks
#---------------------------
WEBHOOK_RUN='disabled'  # use 'on_notification' to enable
WEBHOOK_URL='http://n8n.local:5555/webhook-test/aaaaaaaa-aaaa-aaaa-aaaaa-aaaaaaaaaaaa'
WEBHOOK_PAYLOAD='json'                 # webhook payload data format for the "body > attachements > text" attribute 
                                       # in https://github.com/jokob-sk/NetAlertX/blob/main/docs/webhook_json_sample.json 
                                       #   supported values: 'json', 'html' or 'text'
                                       #   e.g.: for discord use 'html'
WEBHOOK_REQUEST_METHOD='GET'

# Apprise
#---------------------------
APPRISE_RUN='disabled'  # use 'on_notification' to enable
APPRISE_HOST='http://localhost:8000/notify'
APPRISE_URL='mailto://smtp-relay.sendinblue.com:587?from=user@gmail.com&name=apprise&user=user@gmail.com&pass=password&to=user@gmail.com'

# NTFY
#---------------------------
NTFY_RUN='disabled'  # use 'on_notification' to enable
NTFY_HOST='https://ntfy.sh'
NTFY_TOPIC='replace_my_secure_topicname_91h889f28'
NTFY_USER='user'
NTFY_PASSWORD='passw0rd'

# PUSHSAFER
#---------------------------
PUSHSAFER_RUN='disabled'  # use 'on_notification' to enable
PUSHSAFER_TOKEN='ApiKey'

# MQTT
#---------------------------
MQTT_RUN='disabled'  # use 'on_notification' to enable
MQTT_BROKER='192.168.1.2'
MQTT_PORT=1883
MQTT_USER='mqtt'
MQTT_PASSWORD='passw0rd'
MQTT_QOS=0
MQTT_DELAY_SEC=2

#-------------------IMPORTANT INFO-------------------#
#   This file is ingested by a python script, so if  #
#        modified it needs to use python syntax      #
#-------------------IMPORTANT INFO-------------------#

dvystrcil commented 2 weeks ago

It's a permissions issue.

The first listing was before I changed permissions. Then you can see where I opened it wide, and then I started seeing the page.

So this is probably a me thing rather than a you issue, because I am using NFS PVCs with my K8s setup. Permissions can get messed up. Also, the LSIO images lend themselves more to docker runs than to running on K8s.

netalertx-656cf6b867-dfdwl:/# ls -la /app/front/log/
total 3369
drwxr-x---    2 nobody   nobody          20 Jun 15 10:04 .
drwxr-x---    1 nginx    www-data      4096 Jun 15 10:04 ..
-rw-r-----    1 nobody   nobody       63614 Jun 15 11:11 IP_changes.log
-rw-rw-rw-    1 nobody   nobody        2963 Jun 15 10:04 app.conf
-rw-rw-rw-    1 nobody   nobody      622592 Jun 15 10:40 app.db
-rw-r-----    1 nobody   nobody       32768 Jun 15 11:11 app.db-shm
-rw-r-----    1 nobody   nobody     5520832 Jun 15 11:11 app.db-wal
-rw-r-----    1 nobody   nobody    11324837 Jun 15 11:13 app.log
-rw-r-----    1 nobody   nobody           0 Jun 15 10:04 app.php_errors.log
-rw-r-----    1 nobody   nobody           0 Jun 15 10:04 app_front.log
-rw-r-----    1 nobody   nobody           0 Jun 15 10:04 db_is_locked.log
-rw-r-----    1 nobody   nobody        2230 Jun 12 02:00 devices_20240612020052.csv
-rw-r-----    1 nobody   nobody           0 Jun 15 10:04 execution_queue.log
-rw-r-----    1 nobody   nobody         320 Jun 15 10:06 pholus_lastrun.log
-rw-r-----    1 nobody   nobody           0 Jun  8 22:45 pholus_subp_pr.log
-rw-r-----    1 nobody   nobody       10928 Jun 15 10:06 report_output.html
-rw-r-----    1 nobody   nobody         567 Jun 15 11:13 report_output.json
-rw-r-----    1 nobody   nobody        1970 Jun 15 10:06 report_output.txt
-rw-r-----    1 nobody   nobody           0 Jun 15 10:04 stderr.log
-rw-r-----    1 nobody   nobody           0 Jun 15 10:04 stdout.log
netalertx-656cf6b867-dfdwl:/# chmod -R 777 /app/front/log/
netalertx-656cf6b867-dfdwl:/# ls -la /app/front/log/
total 3370
drwxrwxrwx    2 nobody   nobody          20 Jun 15 10:04 .
drwxr-x---    1 nginx    www-data      4096 Jun 15 10:04 ..
-rwxrwxrwx    1 nobody   nobody       63614 Jun 15 11:11 IP_changes.log
-rwxrwxrwx    1 nobody   nobody        2963 Jun 15 10:04 app.conf
-rwxrwxrwx    1 nobody   nobody      622592 Jun 15 10:40 app.db
-rwxrwxrwx    1 nobody   nobody       32768 Jun 15 11:11 app.db-shm
-rwxrwxrwx    1 nobody   nobody     5520832 Jun 15 11:11 app.db-wal
-rwxrwxrwx    1 nobody   nobody    11325512 Jun 15 11:14 app.log
-rwxrwxrwx    1 nobody   nobody           0 Jun 15 10:04 app.php_errors.log
-rwxrwxrwx    1 nobody   nobody           0 Jun 15 10:04 app_front.log
-rwxrwxrwx    1 nobody   nobody           0 Jun 15 10:04 db_is_locked.log
-rwxrwxrwx    1 nobody   nobody        2230 Jun 12 02:00 devices_20240612020052.csv
-rwxrwxrwx    1 nobody   nobody           0 Jun 15 10:04 execution_queue.log
-rwxrwxrwx    1 nobody   nobody         320 Jun 15 10:06 pholus_lastrun.log
-rwxrwxrwx    1 nobody   nobody           0 Jun  8 22:45 pholus_subp_pr.log
-rwxrwxrwx    1 nobody   nobody       10928 Jun 15 10:06 report_output.html
-rwxrwxrwx    1 nobody   nobody         567 Jun 15 11:14 report_output.json
-rwxrwxrwx    1 nobody   nobody        1970 Jun 15 10:06 report_output.txt
-rwxrwxrwx    1 nobody   nobody           0 Jun 15 10:04 stderr.log
-rwxrwxrwx    1 nobody   nobody           0 Jun 15 10:04 stdout.log

Thanks for walking me through this. I am not sure if any of this will help the OP, but I hope so.
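As a side note, chmod -R 777 is fine as a quick test but very permissive; a narrower sketch for the backing host/NFS path would be something like the following (the path is a placeholder, and the right owner depends on how the volume is provisioned):

chmod -R u+rwX,g+rwX /path/to/netalertx/volume
# or chown the path to the UID/GID the container's nginx/php-fpm processes run as
# (check with ps aux inside the container)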

jokob-sk commented 2 weeks ago

Thanks @dvystrcil for the update!

Glad to hear the app is working for you now :)

I added more troubleshooting steps to the https://github.com/jokob-sk/NetAlertX/blob/main/docs/WEB_UI_PORT_DEBUG.md guide and also linked it to the permissions troubleshooting guide where I explicitly mention file ownership. Hope this will help others troubleshoot this quicker.
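A quick ownership/permissions check along the lines of that guide, run from the host (assuming the container is named netalertx):

docker exec netalertx ls -la /app/config /app/db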

joel72265 commented 2 weeks ago

Hi, my apologies for the late reply. I was going through the steps shared above. I am not as knowledgeable, but here are my results.

root@netalertx [ ~ ]# docker container ls --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}" -a
CONTAINER ID   NAMES       PORTS
e5425b3d7f1b   netalertx

root@netalertx [ ~ ]# netstat -tulpn | grep :20211
root@netalertx [ ~ ]# netstat -tulpn | grep :20212  (port was changed, as suggested)
tcp        0      0 0.0.0.0:20212           0.0.0.0:*               LISTEN      946/nginx: master p

root@netalertx [ ~ ]# docker exec -it netalertx /bin/bash
netalertx:/# sudo lsof -i
1       /bin/s6-svscan  0       /dev/null
1       /bin/s6-svscan  1       pipe:[16809]
1       /bin/s6-svscan  2       pipe:[16810]
1       /bin/s6-svscan  3       /run/service/.s6-svscan/lock
1       /bin/s6-svscan  5       /run/service/.s6-svscan/control
1       /bin/s6-svscan  6       /run/service/.s6-svscan/control
1       /bin/s6-svscan  7       anon_inode:[signalfd]
16      /bin/s6-supervise       0       /dev/null
16      /bin/s6-supervise       1       pipe:[16809]
16      /bin/s6-supervise       2       pipe:[16810]
16      /bin/s6-supervise       3       /run/service/s6-linux-init-shutdownd/supervise/lock
16      /bin/s6-supervise       4       /run/service/s6-linux-init-shutdownd/supervise/control
16      /bin/s6-supervise       5       /run/service/s6-linux-init-shutdownd/supervise/control
16      /bin/s6-supervise       6       anon_inode:[signalfd]
19      /usr/bin/s6-linux-init-shutdownd        0       /dev/null
19      /usr/bin/s6-linux-init-shutdownd        1       pipe:[16809]
19      /usr/bin/s6-linux-init-shutdownd        2       pipe:[16810]
19      /usr/bin/s6-linux-init-shutdownd        4       /run/service/s6-linux-init-shutdownd/fifo
19      /usr/bin/s6-linux-init-shutdownd        5       /run/service/s6-linux-init-shutdownd/fifo
25      /bin/s6-supervise       0       /dev/null
25      /bin/s6-supervise       1       pipe:[16809]
25      /bin/s6-supervise       2       pipe:[16810]
25      /bin/s6-supervise       3       /run/s6-rc:s6-rc-init:ilGgHj/servicedirs/s6rc-fdholder/supervise/lock
25      /bin/s6-supervise       4       /run/s6-rc:s6-rc-init:ilGgHj/servicedirs/s6rc-fdholder/supervise/control
25      /bin/s6-supervise       5       /run/s6-rc:s6-rc-init:ilGgHj/servicedirs/s6rc-fdholder/supervise/control
25      /bin/s6-supervise       6       anon_inode:[signalfd]
26      /bin/s6-supervise       0       /dev/null
26      /bin/s6-supervise       1       pipe:[16809]
26      /bin/s6-supervise       2       pipe:[16810]
26      /bin/s6-supervise       3       /run/s6-rc:s6-rc-init:ilGgHj/servicedirs/nginx/supervise/lock
26      /bin/s6-supervise       4       /run/s6-rc:s6-rc-init:ilGgHj/servicedirs/nginx/supervise/control
26      /bin/s6-supervise       5       /run/s6-rc:s6-rc-init:ilGgHj/servicedirs/nginx/supervise/control
26      /bin/s6-supervise       6       anon_inode:[signalfd]
27      /bin/s6-supervise       0       /dev/null
27      /bin/s6-supervise       1       pipe:[16809]
27      /bin/s6-supervise       2       pipe:[16810]
27      /bin/s6-supervise       3       /run/s6-rc:s6-rc-init:ilGgHj/servicedirs/php-fpm/supervise/lock
27      /bin/s6-supervise       4       /run/s6-rc:s6-rc-init:ilGgHj/servicedirs/php-fpm/supervise/control
27      /bin/s6-supervise       5       /run/s6-rc:s6-rc-init:ilGgHj/servicedirs/php-fpm/supervise/control
27      /bin/s6-supervise       6       anon_inode:[signalfd]
28      /bin/s6-supervise       0       /dev/null
28      /bin/s6-supervise       1       pipe:[16809]
28      /bin/s6-supervise       2       pipe:[16810]
28      /bin/s6-supervise       3       /run/s6-rc:s6-rc-init:ilGgHj/servicedirs/netalertx/supervise/lock
28      /bin/s6-supervise       4       /run/s6-rc:s6-rc-init:ilGgHj/servicedirs/netalertx/supervise/control
28      /bin/s6-supervise       5       /run/s6-rc:s6-rc-init:ilGgHj/servicedirs/netalertx/supervise/control
28      /bin/s6-supervise       6       anon_inode:[signalfd]
29      /bin/s6-supervise       0       /dev/null
29      /bin/s6-supervise       1       pipe:[16809]
29      /bin/s6-supervise       2       pipe:[16810]
29      /bin/s6-supervise       3       /run/s6-rc:s6-rc-init:ilGgHj/servicedirs/s6rc-oneshot-runner/supervise/lock
29      /bin/s6-supervise       4       /run/s6-rc:s6-rc-init:ilGgHj/servicedirs/s6rc-oneshot-runner/supervise/control
29      /bin/s6-supervise       5       /run/s6-rc:s6-rc-init:ilGgHj/servicedirs/s6rc-oneshot-runner/supervise/control
29      /bin/s6-supervise       6       anon_inode:[signalfd]
37      /bin/s6-ipcserverd      0       socket:[16984]
37      /bin/s6-ipcserverd      2       pipe:[16809]
37      /bin/s6-ipcserverd      3       anon_inode:[signalfd]
90      /usr/sbin/php-fpm83     0       /dev/null
90      /usr/sbin/php-fpm83     1       /dev/null
90      /usr/sbin/php-fpm83     2       /var/log/php83/error.log
90      /usr/sbin/php-fpm83     3       pipe:[16810]
90      /usr/sbin/php-fpm83     4       /var/log/php83/error.log
90      /usr/sbin/php-fpm83     5       socket:[17052]
90      /usr/sbin/php-fpm83     6       socket:[17053]
90      /usr/sbin/php-fpm83     7       socket:[17054]
90      /usr/sbin/php-fpm83     8       anon_inode:[eventpoll]
94      /usr/sbin/nginx 0       /dev/null
94      /usr/sbin/nginx 1       pipe:[16809]
94      /usr/sbin/nginx 2       /var/log/nginx/error.log
94      /usr/sbin/nginx 3       socket:[17044]
94      /usr/sbin/nginx 4       /var/log/nginx/error.log
94      /usr/sbin/nginx 5       /var/log/nginx/access.log
94      /usr/sbin/nginx 6       socket:[17043]
94      /usr/sbin/nginx 7       socket:[17045]
98      /usr/bin/python3.12     0       /dev/null
98      /usr/bin/python3.12     1       pipe:[16809]
98      /usr/bin/python3.12     2       pipe:[16810]
98      /usr/bin/python3.12     3       /app/db/app.db
98      /usr/bin/python3.12     4       /app/db/app.db-wal
98      /usr/bin/python3.12     5       /app/db/app.db-shm
572     /bin/bash       0       /dev/pts/0
572     /bin/bash       1       /dev/pts/0
572     /bin/bash       2       /dev/pts/0
572     /bin/bash       255     /dev/pts/0
597     /usr/bin/sudo   0       /dev/pts/0
597     /usr/bin/sudo   1       /dev/pts/0
597     /usr/bin/sudo   2       /dev/pts/0
597     /usr/bin/sudo   3       pipe:[19960]
597     /usr/bin/sudo   4       pipe:[19960]
597     /usr/bin/sudo   5       /etc/sudoers
597     /usr/bin/sudo   6       socket:[19962]
597     /usr/bin/sudo   7       /dev/tty
597     /usr/bin/sudo   8       /dev/pts/ptmx
597     /usr/bin/sudo   10      socket:[19963]
598     /usr/bin/sudo   0       /dev/pts/0
598     /usr/bin/sudo   1       /dev/pts/0
598     /usr/bin/sudo   2       /dev/pts/0
598     /usr/bin/sudo   3       pipe:[19960]
598     /usr/bin/sudo   4       pipe:[19960]
598     /usr/bin/sudo   5       /etc/sudoers
598     /usr/bin/sudo   6       socket:[19962]
598     /usr/bin/sudo   8       pipe:[19967]
598     /usr/bin/sudo   9       /dev/pts/1
598     /usr/bin/sudo   10      pipe:[19967]
598     /usr/bin/sudo   11      socket:[19964]

Inside the container,

netalertx:/# curl localhost:20212
(I get the same output as dvystrcil but for some reason it is not showing in preview mode)
<!-- NetAlertX CSS -->
<link rel="stylesheet" href="css/app.css">

netalertx:/# cat /var/log/nginx/error.log
2024/06/11 12:46:41 [emerg] 701#701: bind() to 0.0.0.0:20212 failed (98: Address in use)
2024/06/11 12:46:41 [emerg] 701#701: bind() to 0.0.0.0:20212 failed (98: Address in use)
2024/06/11 12:46:41 [emerg] 701#701: bind() to 0.0.0.0:20212 failed (98: Address in use)
2024/06/11 12:46:41 [emerg] 701#701: bind() to 0.0.0.0:20212 failed (98: Address in use)
2024/06/11 12:46:41 [emerg] 701#701: bind() to 0.0.0.0:20212 failed (98: Address in use)
2024/06/11 12:46:41 [emerg] 701#701: still could not bind()
2024/06/20 10:17:32 [error] 114#114: *16 FastCGI sent in stderr: "PHP message: PHP Warning:  Undefined array key 0 in /app/front/php/templates/security.php on line 36; PHP message: PHP Warning:  Undefined array key 1 in /app/front/php/templates/security.php on line 37; PHP message: PHP Warning:  Undefined array key 0 in /app/front/php/templates/security.php on line 44; PHP message: PHP Warning:  Undefined array key 1 in /app/front/php/templates/security.php on line 45" while reading response header from upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/run/php/php8.3-fpm.sock:", host: "localhost:20212"

netalertx:/# ls -la /app/config
total 16
drwxr-x---    2 nginx    www-data      4096 Jun 20 10:09 .
drwxr-x---    1 nginx    www-data      4096 Jun  8 18:05 ..
-rw-rw-rw-    1 nginx    www-data      2947 Jun 20 10:09 app.conf

jokob-sk commented 2 weeks ago

Hi @joel72265 ,

  1. Could you please provide output of ls -la /app/db?
  2. Can you try not mounting your volumes, for testing? If the app starts without your own mount points, the issue is permission-related and lies with the /config and /db folders on your host (a quick test is sketched below).
  3. Can you check that the permissions and ownership align with this guide? https://github.com/jokob-sk/NetAlertX/blob/main/docs/FILE_PERMISSIONS.md
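As a blunt test of points 2 and 3 - in the spirit of the chmod that resolved the Kubernetes case above - you could temporarily open up the mounted folders on the host (paths taken from the original run command; adjust to your setup, and tighten permissions again afterwards):

sudo chmod -R a+rwX /docker/pialert/config /docker/pialert/db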

joel72265 commented 2 weeks ago

1, 3.

netalertx:/# ls -la /app/db
total 4444
drwxr-x---    2 nginx    www-data      4096 Jun 11 12:37 .
drwxr-x---    1 nginx    www-data      4096 Jun  8 18:05 ..
-rw-rw-rw-    1 nginx    www-data    372736 Jun 20 11:05 app.db
-rw-r-----    1 nginx    www-data     32768 Jun 20 12:00 app.db-shm
-rw-r-----    1 nginx    www-data   4128272 Jun 20 12:00 app.db-wal

netalertx:/# ls -la /app/config
total 16
drwxr-x---    2 nginx    www-data      4096 Jun 20 10:39 .
drwxr-x---    1 nginx    www-data      4096 Jun  8 18:05 ..
-rw-rw-rw-    1 nginx    www-data      2947 Jun 20 10:39 app.conf

netalertx:/# cat /var/log/nginx/error.log
netalertx:/#
error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.41/exec/0ab0a8a8dfbce50932f62b852378228173e7eb37d1fe3b5adc62d9a9172c4c0b/json": read unix @->/run/docker.sock: read: connection reset by peer

  2. Why do you say that I am mounting a volume? I don't believe I am; netalertx is the only container running on the OS, and all the netalertx files are stored in the folder /docker/netalertx

jokob-sk commented 2 weeks ago

Could you please try to run the container with this command to see if this works?

docker run -d --rm --network=host \
  -e TZ=Europe/Berlin \
  -e PORT=20222 \
  jokobsk/netalertx:latest

joel72265 commented 2 weeks ago

sure, please see the output from the commands used before

root@netalertx [ ~ ]# docker run -d --rm --name=netalertx --network=host -e TZ=Europe/Berlin -e PORT=20222 jokobsk/netalertx:latest
3b7735040a2421f19fb90e9f4d9144b8bc4e173c942e3978ecf379f14eff32de

root@netalertx [ ~ ]# docker container list
CONTAINER ID   IMAGE                      COMMAND   CREATED       STATUS                 PORTS     NAMES
3b7735040a24   jokobsk/netalertx:latest   "/init"   2 hours ago   Up 2 hours (healthy)             netalertx

root@netalertx [ ~ ]# netstat -tulpn | grep :20222
tcp        0      0 0.0.0.0:20222           0.0.0.0:*               LISTEN      12757/nginx: master

root@netalertx [ ~ ]# docker container ls --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}" -a
CONTAINER ID   NAMES       PORTS
3b7735040a24   netalertx

root@netalertx [ ~ ]# docker exec -it netalertx /bin/bash
netalertx:/# curl localhost:20222

netalertx:/# cat /var/log/nginx/error.log
2024/06/20 13:22:34 [error] 103#103: *253 FastCGI sent in stderr: "PHP message: PHP Warning: Undefined array key 0 in /app/front/php/templates/security.php on line 36; PHP message: PHP Warning: Undefined array key 1 in /app/front/php/templates/security.php on line 37; PHP message: PHP Warning: Undefined array key 0 in /app/front/php/templates/security.php on line 44; PHP message: PHP Warning: Undefined array key 1 in /app/front/php/templates/security.php on line 45" while reading response header from upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/run/php/php8.3-fpm.sock:", host: "localhost:20222"

netalertx:/# ls -la /app/config
total 16
drwxr-x---    1 nginx    www-data      4096 Jun 20 11:14 .
drwxr-x---    1 nginx    www-data      4096 Jun  8 16:05 ..
-rw-rw-rw-    1 nginx    www-data      2950 Jun 20 11:14 app.conf

netalertx:/# ls -la /app/db
total 4456
drwxr-x---    1 nginx    www-data      4096 Jun 20 11:14 .
drwxr-x---    1 nginx    www-data      4096 Jun  8 16:05 ..
-rw-rw-rw-    1 nginx    www-data    380928 Jun 20 12:05 app.db
-rw-rw-rw-    1 nginx    www-data     32768 Jun 20 13:20 app.db-shm
-rw-rw-rw-    1 nginx    www-data   4132392 Jun 20 13:20 app.db-wal

netalertx:/# sudo lsof -i 1 /bin/s6-svscan 0 /dev/null 1 /bin/s6-svscan 1 pipe:[59089] 1 /bin/s6-svscan 2 pipe:[59090] 1 /bin/s6-svscan 3 /run/service/.s6-svscan/lock 1 /bin/s6-svscan 5 /run/service/.s6-svscan/control 1 /bin/s6-svscan 6 /run/service/.s6-svscan/control 1 /bin/s6-svscan 7 anon_inode:[signalfd] 18 /bin/s6-supervise 0 /dev/null 18 /bin/s6-supervise 1 pipe:[59089] 18 /bin/s6-supervise 2 pipe:[59090] 18 /bin/s6-supervise 3 /run/service/s6-linux-init-shutdownd/supervise/lock 18 /bin/s6-supervise 4 /run/service/s6-linux-init-shutdownd/supervise/control 18 /bin/s6-supervise 5 /run/service/s6-linux-init-shutdownd/supervise/control 18 /bin/s6-supervise 6 anon_inode:[signalfd] 20 /usr/bin/s6-linux-init-shutdownd 0 /dev/null 20 /usr/bin/s6-linux-init-shutdownd 1 pipe:[59089] 20 /usr/bin/s6-linux-init-shutdownd 2 pipe:[59090] 20 /usr/bin/s6-linux-init-shutdownd 4 /run/service/s6-linux-init-shutdownd/fifo 20 /usr/bin/s6-linux-init-shutdownd 5 /run/service/s6-linux-init-shutdownd/fifo 27 /bin/s6-supervise 0 /dev/null 27 /bin/s6-supervise 1 pipe:[59089] 27 /bin/s6-supervise 2 pipe:[59090] 27 /bin/s6-supervise 3 /run/s6-rc:s6-rc-init:jIdpkp/servicedirs/s6rc-fdholder/supervise/lock 27 /bin/s6-supervise 4 /run/s6-rc:s6-rc-init:jIdpkp/servicedirs/s6rc-fdholder/supervise/control 27 /bin/s6-supervise 5 /run/s6-rc:s6-rc-init:jIdpkp/servicedirs/s6rc-fdholder/supervise/control 27 /bin/s6-supervise 6 anon_inode:[signalfd] 28 /bin/s6-supervise 0 /dev/null 28 /bin/s6-supervise 1 pipe:[59089] 28 /bin/s6-supervise 2 pipe:[59090] 28 /bin/s6-supervise 3 /run/s6-rc:s6-rc-init:jIdpkp/servicedirs/nginx/supervise/lock 28 /bin/s6-supervise 4 /run/s6-rc:s6-rc-init:jIdpkp/servicedirs/nginx/supervise/control 28 /bin/s6-supervise 5 /run/s6-rc:s6-rc-init:jIdpkp/servicedirs/nginx/supervise/control 28 /bin/s6-supervise 6 anon_inode:[signalfd] 29 /bin/s6-supervise 0 /dev/null 29 /bin/s6-supervise 1 pipe:[59089] 29 /bin/s6-supervise 2 pipe:[59090] 29 /bin/s6-supervise 3 /run/s6-rc:s6-rc-init:jIdpkp/servicedirs/php-fpm/supervise/lock 29 /bin/s6-supervise 4 /run/s6-rc:s6-rc-init:jIdpkp/servicedirs/php-fpm/supervise/control 29 /bin/s6-supervise 5 /run/s6-rc:s6-rc-init:jIdpkp/servicedirs/php-fpm/supervise/control 29 /bin/s6-supervise 6 anon_inode:[signalfd] 30 /bin/s6-supervise 0 /dev/null 30 /bin/s6-supervise 1 pipe:[59089] 30 /bin/s6-supervise 2 pipe:[59090] 30 /bin/s6-supervise 3 /run/s6-rc:s6-rc-init:jIdpkp/servicedirs/netalertx/supervise/lock 30 /bin/s6-supervise 4 /run/s6-rc:s6-rc-init:jIdpkp/servicedirs/netalertx/supervise/control 30 /bin/s6-supervise 5 /run/s6-rc:s6-rc-init:jIdpkp/servicedirs/netalertx/supervise/control 30 /bin/s6-supervise 6 anon_inode:[signalfd] 31 /bin/s6-supervise 0 /dev/null 31 /bin/s6-supervise 1 pipe:[59089] 31 /bin/s6-supervise 2 pipe:[59090] 31 /bin/s6-supervise 3 /run/s6-rc:s6-rc-init:jIdpkp/servicedirs/s6rc-oneshot-runner/supervise/lock 31 /bin/s6-supervise 4 /run/s6-rc:s6-rc-init:jIdpkp/servicedirs/s6rc-oneshot-runner/supervise/control 31 /bin/s6-supervise 5 /run/s6-rc:s6-rc-init:jIdpkp/servicedirs/s6rc-oneshot-runner/supervise/control 31 /bin/s6-supervise 6 anon_inode:[signalfd] 39 /bin/s6-ipcserverd 0 socket:[59196] 39 /bin/s6-ipcserverd 2 pipe:[59089] 39 /bin/s6-ipcserverd 3 anon_inode:[signalfd] 86 /usr/sbin/php-fpm83 0 /dev/null 86 /usr/sbin/php-fpm83 1 /dev/null 86 /usr/sbin/php-fpm83 2 /var/log/php83/error.log 86 /usr/sbin/php-fpm83 3 pipe:[59090] 86 /usr/sbin/php-fpm83 4 /var/log/php83/error.log 86 /usr/sbin/php-fpm83 5 socket:[59264] 86 /usr/sbin/php-fpm83 6 socket:[59265] 86 
/usr/sbin/php-fpm83 7 socket:[59266] 86 /usr/sbin/php-fpm83 8 anon_inode:[eventpoll] 90 /usr/sbin/nginx 0 /dev/null 90 /usr/sbin/nginx 1 pipe:[59089] 90 /usr/sbin/nginx 2 /var/log/nginx/error.log 90 /usr/sbin/nginx 3 socket:[59252] 90 /usr/sbin/nginx 4 /var/log/nginx/error.log 90 /usr/sbin/nginx 5 /var/log/nginx/access.log 90 /usr/sbin/nginx 6 socket:[59251] 90 /usr/sbin/nginx 7 socket:[59253] 94 /usr/bin/python3.12 0 /dev/null 94 /usr/bin/python3.12 1 pipe:[59089] 94 /usr/bin/python3.12 2 pipe:[59090] 94 /usr/bin/python3.12 3 /app/db/app.db 94 /usr/bin/python3.12 4 /app/db/app.db-wal 94 /usr/bin/python3.12 5 /app/db/app.db-shm 5165 /bin/bash 0 /dev/pts/0 5165 /bin/bash 1 /dev/pts/0 5165 /bin/bash 2 /dev/pts/0 5165 /bin/bash 255 /dev/pts/0 5369 /usr/bin/sudo 0 /dev/pts/0 5369 /usr/bin/sudo 1 /dev/pts/0 5369 /usr/bin/sudo 2 /dev/pts/0 5369 /usr/bin/sudo 3 pipe:[86889] 5369 /usr/bin/sudo 4 pipe:[86889] 5369 /usr/bin/sudo 5 /etc/sudoers 5369 /usr/bin/sudo 6 socket:[86891] 5369 /usr/bin/sudo 7 /dev/tty 5369 /usr/bin/sudo 8 /dev/pts/ptmx 5369 /usr/bin/sudo 10 socket:[86892] 5370 /usr/bin/sudo 0 /dev/pts/0 5370 /usr/bin/sudo 1 /dev/pts/0 5370 /usr/bin/sudo 2 /dev/pts/0 5370 /usr/bin/sudo 3 pipe:[86889] 5370 /usr/bin/sudo 4 pipe:[86889] 5370 /usr/bin/sudo 5 /etc/sudoers 5370 /usr/bin/sudo 6 socket:[86891] 5370 /usr/bin/sudo 8 pipe:[86896] 5370 /usr/bin/sudo 9 /dev/pts/1 5370 /usr/bin/sudo 10 pipe:[86896] 5370 /usr/bin/sudo 11 socket:[86893]

netalertx-03

jokob-sk commented 2 weeks ago

Hi @joel72265 ,

Thanks for the detailed information.

Just FYI, the Undefined array key 1 in /app/front/php/templates/security.php on line 37 errors don't prevent the app from starting - they're just an indication of the PWD setting not being available in the conf file.

  1. What jumps out is that sudo lsof -i in the container doesn't seem to produce the expected output. Here is what the output should look like. Could you please double check and try to run the nginx server in the container?
# sudo lsof -i
COMMAND   PID  USER FD   TYPE     DEVICE SIZE/OFF NODE NAME
nginx     627  root  6u  IPv4  895356069      0t0  TCP *:20211 (LISTEN)
nginx   13272 nginx  6u  IPv4  895356069      0t0  TCP *:20211 (LISTEN)
nginx   13273 nginx  4u  IPv4 1038731119      0t0  TCP unifi:20211->DESKTOP-DIHOG0E.localdomain:60447 (ESTABLISHED)
nginx   13273 nginx  6u  IPv4  895356069      0t0  TCP *:20211 (LISTEN)
nginx   13273 nginx 10u  IPv4 1038744873      0t0  TCP localhost:20211->localhost:48382 (ESTABLISHED)
nginx   13274 nginx  6u  IPv4  895356069      0t0  TCP *:20211 (LISTEN)
nginx   13275 nginx  6u  IPv4  895356069      0t0  TCP *:20211 (LISTEN)
  2. Do you have another server where you could run this container, to test whether it's network-related?
  3. The curl command isn't producing the expected result (maybe you pasted only the first lines of the output), but here is the start of my output:
# curl localhost:20211 
<!-- NetAlertX CSS -->
<link rel="stylesheet" href="css/app.css">

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
...continues
  4. Could you confirm you tried different browsers?
  5. Could you please post a screenshot of the network tab in your browser dev tools (usually F12)?
  6. Can you try to curl an API endpoint (inside and outside the container)? e.g. curl localhost:20211/api/app_state.json. The output should look something like this:
#curl localhost:20211/api/app_state.json
{
    "currentState": "Plugins: NSLOOKUP",
    "lastUpdated": "2024-06-20 17:52:30+10:00",
    "settingsSaved": 1718680610.8368986,
    "settingsImported": 1718680610.8368986,
    "showSpinner": false,
    "isNewVersion": true,
    "isNewVersionChecked": 1713348596
}
  7. Could you check if PHP is running correctly?
# ps aux | grep php
   25 root      0:00 s6-supervise php-fpm
  623 root      0:11 {php-fpm83} php-fpm: master process (/etc/php83/php-fpm.conf)
  651 nginx     2:17 {php-fpm83} php-fpm: pool www
  652 nginx     2:16 {php-fpm83} php-fpm: pool www
  893 nginx     2:12 {php-fpm83} php-fpm: pool www
 8124 root      0:00 grep PHP
  8. What is your container host system? I ask because Windows/macOS don't support --network=host, if I remember correctly. (more info here: https://github.com/jokob-sk/NetAlertX/issues/558 & https://github.com/jokob-sk/NetAlertX/issues/525)

I hope looking into some of these will bring us closer to solving this. j

joel72265 commented 2 weeks ago

Thank you for checking, please find the output of the information you requested

1)

root@netalertx [ ~ ]# docker run -d --rm --name=netalertx --network=host -e TZ=Europe/Berlin -e PORT=20211 jokobsk/netalertx:latest 47cc3f8c46a6afca0cd274afa16985321cfd9f361db3e0e58a4b3fd86745cfe3 root@netalertx [ ~ ]# docker exec -it netalertx /bin/bash netalertx:/# sudo lsof -i 1 /bin/s6-svscan 0 /dev/null 1 /bin/s6-svscan 1 pipe:[213245] 1 /bin/s6-svscan 2 pipe:[213246] 1 /bin/s6-svscan 3 /run/service/.s6-svscan/lock 1 /bin/s6-svscan 5 /run/service/.s6-svscan/control 1 /bin/s6-svscan 6 /run/service/.s6-svscan/control 1 /bin/s6-svscan 7 anon_inode:[signalfd] 18 /bin/s6-supervise 0 /dev/null 18 /bin/s6-supervise 1 pipe:[213245] 18 /bin/s6-supervise 2 pipe:[213246] 18 /bin/s6-supervise 3 /run/service/s6-linux-init-shutdownd/supervise/lock 18 /bin/s6-supervise 4 /run/service/s6-linux-init-shutdownd/supervise/control 18 /bin/s6-supervise 5 /run/service/s6-linux-init-shutdownd/supervise/control 18 /bin/s6-supervise 6 anon_inode:[signalfd] 20 /usr/bin/s6-linux-init-shutdownd 0 /dev/null 20 /usr/bin/s6-linux-init-shutdownd 1 pipe:[213245] 20 /usr/bin/s6-linux-init-shutdownd 2 pipe:[213246] 20 /usr/bin/s6-linux-init-shutdownd 4 /run/service/s6-linux-init-shutdownd/fifo 20 /usr/bin/s6-linux-init-shutdownd 5 /run/service/s6-linux-init-shutdownd/fifo 27 /bin/s6-supervise 0 /dev/null 27 /bin/s6-supervise 1 pipe:[213245] 27 /bin/s6-supervise 2 pipe:[213246] 27 /bin/s6-supervise 3 /run/s6-rc:s6-rc-init:nDDFkD/servicedirs/s6rc-fdholder/supervise/lock 27 /bin/s6-supervise 4 /run/s6-rc:s6-rc-init:nDDFkD/servicedirs/s6rc-fdholder/supervise/control 27 /bin/s6-supervise 5 /run/s6-rc:s6-rc-init:nDDFkD/servicedirs/s6rc-fdholder/supervise/control 27 /bin/s6-supervise 6 anon_inode:[signalfd] 28 /bin/s6-supervise 0 /dev/null 28 /bin/s6-supervise 1 pipe:[213245] 28 /bin/s6-supervise 2 pipe:[213246] 28 /bin/s6-supervise 3 /run/s6-rc:s6-rc-init:nDDFkD/servicedirs/nginx/supervise/lock 28 /bin/s6-supervise 4 /run/s6-rc:s6-rc-init:nDDFkD/servicedirs/nginx/supervise/control 28 /bin/s6-supervise 5 /run/s6-rc:s6-rc-init:nDDFkD/servicedirs/nginx/supervise/control 28 /bin/s6-supervise 6 anon_inode:[signalfd] 29 /bin/s6-supervise 0 /dev/null 29 /bin/s6-supervise 1 pipe:[213245] 29 /bin/s6-supervise 2 pipe:[213246] 29 /bin/s6-supervise 3 /run/s6-rc:s6-rc-init:nDDFkD/servicedirs/php-fpm/supervise/lock 29 /bin/s6-supervise 4 /run/s6-rc:s6-rc-init:nDDFkD/servicedirs/php-fpm/supervise/control 29 /bin/s6-supervise 5 /run/s6-rc:s6-rc-init:nDDFkD/servicedirs/php-fpm/supervise/control 29 /bin/s6-supervise 6 anon_inode:[signalfd] 30 /bin/s6-supervise 0 /dev/null 30 /bin/s6-supervise 1 pipe:[213245] 30 /bin/s6-supervise 2 pipe:[213246] 30 /bin/s6-supervise 3 /run/s6-rc:s6-rc-init:nDDFkD/servicedirs/netalertx/supervise/lock 30 /bin/s6-supervise 4 /run/s6-rc:s6-rc-init:nDDFkD/servicedirs/netalertx/supervise/control 30 /bin/s6-supervise 5 /run/s6-rc:s6-rc-init:nDDFkD/servicedirs/netalertx/supervise/control 30 /bin/s6-supervise 6 anon_inode:[signalfd] 31 /bin/s6-supervise 0 /dev/null 31 /bin/s6-supervise 1 pipe:[213245] 31 /bin/s6-supervise 2 pipe:[213246] 31 /bin/s6-supervise 3 /run/s6-rc:s6-rc-init:nDDFkD/servicedirs/s6rc-oneshot-runner/supervise/lock 31 /bin/s6-supervise 4 /run/s6-rc:s6-rc-init:nDDFkD/servicedirs/s6rc-oneshot-runner/supervise/control 31 /bin/s6-supervise 5 /run/s6-rc:s6-rc-init:nDDFkD/servicedirs/s6rc-oneshot-runner/supervise/control 31 /bin/s6-supervise 6 anon_inode:[signalfd] 39 /bin/s6-ipcserverd 0 socket:[213352] 39 /bin/s6-ipcserverd 2 pipe:[213245] 39 /bin/s6-ipcserverd 3 anon_inode:[signalfd] 86 
/usr/sbin/php-fpm83 0 /dev/null 86 /usr/sbin/php-fpm83 1 /dev/null 86 /usr/sbin/php-fpm83 2 /var/log/php83/error.log 86 /usr/sbin/php-fpm83 3 pipe:[213246] 86 /usr/sbin/php-fpm83 4 /var/log/php83/error.log 86 /usr/sbin/php-fpm83 5 socket:[213420] 86 /usr/sbin/php-fpm83 6 socket:[213421] 86 /usr/sbin/php-fpm83 7 socket:[213422] 86 /usr/sbin/php-fpm83 8 anon_inode:[eventpoll] 90 /usr/sbin/nginx 0 /dev/null 90 /usr/sbin/nginx 1 pipe:[213245] 90 /usr/sbin/nginx 2 /var/log/nginx/error.log 90 /usr/sbin/nginx 3 socket:[213406] 90 /usr/sbin/nginx 4 /var/log/nginx/error.log 90 /usr/sbin/nginx 5 /var/log/nginx/access.log 90 /usr/sbin/nginx 6 socket:[213405] 90 /usr/sbin/nginx 7 socket:[213407] 94 /usr/bin/python3.12 0 /dev/null 94 /usr/bin/python3.12 1 pipe:[213245] 94 /usr/bin/python3.12 2 pipe:[213246] 94 /usr/bin/python3.12 3 /app/db/app.db 94 /usr/bin/python3.12 4 /app/db/app.db-wal 94 /usr/bin/python3.12 5 /app/db/app.db-shm 2148 /bin/bash 0 /dev/pts/0 2148 /bin/bash 1 /dev/pts/0 2148 /bin/bash 2 /dev/pts/0 2148 /bin/bash 255 /dev/pts/0 2154 /usr/bin/sudo 0 /dev/pts/0 2154 /usr/bin/sudo 1 /dev/pts/0 2154 /usr/bin/sudo 2 /dev/pts/0 2154 /usr/bin/sudo 3 pipe:[223283] 2154 /usr/bin/sudo 4 pipe:[223283] 2154 /usr/bin/sudo 5 /etc/sudoers 2154 /usr/bin/sudo 6 socket:[223285] 2154 /usr/bin/sudo 7 /dev/tty 2154 /usr/bin/sudo 8 /dev/pts/ptmx 2154 /usr/bin/sudo 10 socket:[223286] 2155 /usr/bin/sudo 0 /dev/pts/0 2155 /usr/bin/sudo 1 /dev/pts/0 2155 /usr/bin/sudo 2 /dev/pts/0 2155 /usr/bin/sudo 3 pipe:[223283] 2155 /usr/bin/sudo 4 pipe:[223283] 2155 /usr/bin/sudo 5 /etc/sudoers 2155 /usr/bin/sudo 6 socket:[223285] 2155 /usr/bin/sudo 8 pipe:[223290] 2155 /usr/bin/sudo 9 /dev/pts/1 2155 /usr/bin/sudo 10 pipe:[223290] 2155 /usr/bin/sudo 11 socket:[223287]

netalertx:/# nginx
nginx: [emerg] bind() to 0.0.0.0:20211 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:20211 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:20211 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:20211 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:20211 failed (98: Address in use)
nginx: [emerg] still could not bind()

2)

This is running in a VM. I can migrate it to another server; the page still does not load.

3)

netalertx:/# curl localhost:20211
Screenshot attached: netalertx-05

4)

Browsers tried: Chrome 126.0.6478.114 (64-bit), Firefox v125.0.3-r0

5)

Screenshot attached netalertx-04

6)

INSIDE:

netalertx:/# curl localhost:20211/api/app_state.json
{
    "currentState": "Process: Wait",
    "lastUpdated": "2024-06-21 08:24:32+02:00",
    "settingsSaved": 1718947481.6130145,
    "settingsImported": 1718947481.6130145,
    "showSpinner": false,
    "isNewVersion": false,
    "isNewVersionChecked": 1718947483
}

OUTSIDE:

root@netalertx [ ~ ]# curl localhost:20211/api/app_state.json
{
    "currentState": "Process: Wait",
    "lastUpdated": "2024-06-21 08:26:39+02:00",
    "settingsSaved": 1718947481.6130145,
    "settingsImported": 1718947481.6130145,
    "showSpinner": false,
    "isNewVersion": false,
    "isNewVersionChecked": 1718951134
}

7)

root@netalertx [ ~ ]# ps aux | grep php
root     47863  0.0  0.1   1072   588 ?        S    09:24   0:00 s6-supervise php-fpm
root     47920  0.0  1.9  23360  9620 ?        Ss   09:24   0:00 php-fpm: master process (/etc/php83/php-fpm.conf)
101      47945  0.0  0.6  23368  3188 ?        S    09:24   0:00 php-fpm: pool www
101      47946  0.0  1.8  23480  9132 ?        S    09:24   0:00 php-fpm: pool www
root     51901  0.0  0.1   5600   648 pts/0    S+   10:29   0:00 grep --color=auto php

8)

Operating System: VMware Photon

root@netalertx [ ~ ]# cat /etc/photon-release
VMware Photon OS 4.0
PHOTON_BUILD_NUMBER=2f5aad892

jokob-sk commented 1 week ago

Hi @joel72265 ,

Thanks for the details.

  1. Do you have any other containers running on --network=host? If not, it might be something in your network config that is blocking it.
  2. I noticed that if I run ps aux | grep php the owner of those PHP processes is nginx; yours shows 101 (?), which I'm not sure what means.
# ps aux | grep php
   25 root      0:00 s6-supervise php-fpm
  623 root      0:11 {php-fpm83} php-fpm: master process (/etc/php83/php-fpm.conf)
  651 nginx     2:17 {php-fpm83} php-fpm: pool www
  652 nginx     2:16 {php-fpm83} php-fpm: pool www
  893 nginx     2:12 {php-fpm83} php-fpm: pool www
 8124 root      0:00 grep PHP

looking at what the nginx process owns:

Synology-NAS:/# ps aux | grep nginx
   29 root      0:00 s6-supervise nginx
  627 root      0:00 nginx: master process nginx -g daemon off;
  651 nginx     2:51 {php-fpm83} php-fpm: pool www
  652 nginx     2:51 {php-fpm83} php-fpm: pool www
  893 nginx     2:48 {php-fpm83} php-fpm: pool www
12430 root      0:00 grep nginx
13272 nginx     0:36 nginx: worker process
13273 nginx     0:36 nginx: worker process
13274 nginx     0:40 nginx: worker process
13275 nginx     0:34 nginx: worker process
  3. Can you try curl on the outside with an IP, not with localhost?

    root@netalertx [ ~ ]# curl 192.170.1.85:20211/api/app_state.json
  4. Firewall rules - you can use iptables or firewalld to check and modify rules:

sudo iptables -L -n -v
sudo firewall-cmd --list-all
  5. You can also test by running the container without the --network=host flag and mapping ports explicitly to see if there is a difference:
docker run -d --rm --name=netalertx -p 20211:20211 -e TZ=Europe/Berlin -e PORT=20211 jokobsk/netalertx:latest

I'm slowly running out of ideas.

Isolate the issue as much as possible. This could mean running the container on your desktop or on another network to see if the issue is network/environment-related. You could also ask in VMware Photon-related user groups/forums. I can't reproduce your issue and I don't have the OS/setup you are running, so I don't know where to go from here.

joel72265 commented 6 days ago

You got it :) It has something to do with the firewall; I disabled the firewall and the web page came up.

In the older version, pialert, the firewall was 'active' but the GUI was still accessible. Could you please share which ports/firewall rules I need to add? Suggestion: load a default page even if the firewall is enabled, to rule out the firewall. Please find the output of the information you requested below.

  1. Do you have any other containers running on --network=host? If not, it might be something in your network config that is blocking it. - No other containers

  2. I noticed that if I run ps aux | grep php the owner of those PHP processes is nginx; yours shows 101 (?), which I'm not sure what means. - Outputs of ps aux | grep php and ps aux | grep nginx attached. Thank you

  3. Can you try curl on the outside with an IP, not with localhost?

root@netalertx [ ~ ]# curl 192.170.1.85:20211/api/app_state.json
{
    "currentState": "Process: Wait",
    "lastUpdated": "2024-06-29 10:54:43+04:00",
    "settingsSaved": 1719644075.606695,
    "settingsImported": 1719644075.606695,
    "showSpinner": false,
    "isNewVersion": false,
    "isNewVersionChecked": 1719644080
}

  4. Firewall rules - you can use iptables or firewalld to check and modify rules (sudo iptables -L -n -v, sudo firewall-cmd --list-all):

root@netalertx [ ~ ]# cat /etc/systemd/scripts/ip4save
# Generated by iptables-save v1.8.7 on Wed Feb 15 14:58:34 2023
*mangle
:PREROUTING ACCEPT [713:58120]
:INPUT ACCEPT [712:57882]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [501:64235]
:POSTROUTING ACCEPT [501:64235]
COMMIT
# Completed on Wed Feb 15 14:58:34 2023
# Generated by iptables-save v1.8.7 on Wed Feb 15 14:58:34 2023
*nat
:PREROUTING ACCEPT [107:9796]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [2:152]
:POSTROUTING ACCEPT [2:152]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Wed Feb 15 14:58:34 2023
# Generated by iptables-save v1.8.7 on Wed Feb 15 14:58:34 2023
*filter
:INPUT DROP [106:9558]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -j ACCEPT
-A OUTPUT -p icmp -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Wed Feb 15 14:58:34 2023
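Given the INPUT chain's DROP policy above (only loopback, established connections, SSH on port 22 and ICMP are accepted), a minimal sketch of a rule that would allow the web UI would be something like the following (20211 is the port used in this thread; on Photon OS the rule would likely also need to be added to /etc/systemd/scripts/ip4save, the file shown above, to survive a reboot):

sudo iptables -A INPUT -p tcp -m tcp --dport 20211 -j ACCEPT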

jokob-sk commented 6 days ago

Hi @joel72265, that's great to hear. Regrettably I'm unsure, as I'm not running a firewall on my network; I access my network via a VPN.

If you figure out which ports are needed, can you please let me know?

joel72265 commented 6 days ago

ok sure, if I figure this out I will get back to you