inverse-inc / packetfence

PacketFence is a fully supported, trusted, Free and Open Source network access control (NAC) solution. Boasting an impressive feature set including a captive portal for registration and remediation, centralized wired and wireless management, powerful BYOD management options, 802.1X support, and layer-2 isolation of problematic devices, PacketFence can be used to effectively secure heterogeneous networks from small to very large.
https://packetfence.org
GNU General Public License v2.0

haproxy-admin does not start in cluster (debian) #6052

Closed · knumsi closed this issue 2 years ago

knumsi commented 3 years ago

Last week my PF cluster did not come up after I tried to add a new network interface and rebooted the system. NAC works, but the admin interface is not alive.

The system is Debian. The original installation was 10.1.0; I am now on 10.2.0. The system does not come up automatically when rebooting. Strangely, after the update from 10.1.0 to 10.2.0, reboots did work without any problem.

The error when running the command:

root@pf1:/# /usr/local/pf/bin/pfcmd service haproxy-admin restart
Service                                                 Status    PID
Job for packetfence-haproxy-admin.service failed because the control process exited with error code.
See "systemctl status packetfence-haproxy-admin.service" and "journalctl -xe" for details.
packetfence-haproxy-admin.service                       stopped

And in a little more detail:

root@pf1:/home/pfadmin# systemctl status packetfence-haproxy-admin.service
● packetfence-haproxy-admin.service - PacketFence HAProxy Load Balancer for the Admin GUI
   Loaded: loaded (/lib/systemd/system/packetfence-haproxy-admin.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Fri 2020-12-18 20:11:52 CET; 8s ago
  Process: 3210 ExecStart=/usr/sbin/haproxy -Ws -f /usr/local/pf/var/conf/haproxy-admin.conf -p /usr/local/pf/var/run/haproxy-admin.pid (code=exited, status=1/FAILURE)
  Process: 3206 ExecStartPre=/usr/bin/perl -I/usr/local/pf/lib -Mpf::services::manager::haproxy_admin -e pf::services::manager::haproxy_admin->new()->generateConfig() (code=exited, status=0/SUCCESS)
Main PID: 3210 (code=exited, status=1/FAILURE)

Dez 18 20:11:52 pf01 systemd[1]: packetfence-haproxy-admin.service: Unit entered failed state.
Dez 18 20:11:52 pf01 systemd[1]: packetfence-haproxy-admin.service: Failed with result 'exit-code'.
Dez 18 20:11:52 pf01 systemd[1]: packetfence-haproxy-admin.service: Service hold-off time over, scheduling restart.
Dez 18 20:11:52 pf01 systemd[1]: Stopped PacketFence HAProxy Load Balancer for the Admin GUI.
Dez 18 20:11:52 pf01 systemd[1]: packetfence-haproxy-admin.service: Start request repeated too quickly.
Dez 18 20:11:52 pf01 systemd[1]: Failed to start PacketFence HAProxy Load Balancer for the Admin GUI.
Dez 18 20:11:52 pf01 systemd[1]: packetfence-haproxy-admin.service: Unit entered failed state.
Dez 18 20:11:52 pf01 systemd[1]: packetfence-haproxy-admin.service: Failed with result 'exit-code'.
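
The journalctl command suggested in that output is the standard way to pull the unit's own log lines; since systemd captures haproxy's stderr, it should show the same parse errors that appear in the debug run below (plain journalctl usage, nothing PacketFence-specific):

journalctl -xe -u packetfence-haproxy-admin.service
# or only the most recent entries for this unit:
journalctl -u packetfence-haproxy-admin.service -n 50 --no-pager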

After this I tried to run haproxy manually with debugging:

root@pf1:/home/pfadmin# /usr/sbin/haproxy -f /usr/local/pf/var/conf/haproxy-admin.conf -p /usr/local/pf/var/run/haproxy-admin.pid -d
[WARNING] 352/201254 (3321) : parsing [/usr/local/pf/var/conf/haproxy-admin.conf:69] : a 'http-request' rule placed after a 'reqadd' rule will still be processed before.
[WARNING] 352/201254 (3321) : parsing [/usr/local/pf/var/conf/haproxy-admin.conf:71] : a 'http-request' rule placed after a 'reqadd' rule will still be processed before.
[WARNING] 352/201254 (3321) : parsing [/usr/local/pf/var/conf/haproxy-admin.conf:73] : a 'http-request' rule placed after a 'reqadd' rule will still be processed before.
[WARNING] 352/201254 (3321) : parsing [/usr/local/pf/var/conf/haproxy-admin.conf:75] : a 'http-request' rule placed after a 'reqadd' rule will still be processed before.
[WARNING] 352/201254 (3321) : parsing [/usr/local/pf/var/conf/haproxy-admin.conf:92] : a 'http-request' rule placed after a 'reqadd' rule will still be processed before.
[WARNING] 352/201254 (3321) : parsing [/usr/local/pf/var/conf/haproxy-admin.conf:94] : a 'http-request' rule placed after a 'reqadd' rule will still be processed before.
[WARNING] 352/201254 (3321) : parsing [/usr/local/pf/var/conf/haproxy-admin.conf:97] : a 'http-request' rule placed after a 'reqadd' rule will still be processed before.
[WARNING] 352/201254 (3321) : parsing [/usr/local/pf/var/conf/haproxy-admin.conf:99] : a 'http-request' rule placed after a 'reqadd' rule will still be processed before.
[ALERT] 352/201254 (3321) : Parsing [/usr/local/pf/var/conf/haproxy-admin.conf:102]: backend 'XXX.XXX.XXX.XXX-admin' has the same name as backend 'XXX.XXX.XXX.XXX-admin' declared at /usr/local/pf/var/conf/haproxy-admin.conf:78.
[WARNING] 352/201254 (3321) : parsing [/usr/local/pf/var/conf/haproxy-admin.conf:151] : a 'http-request' rule placed after a 'reqadd' rule will still be processed before.
[WARNING] 352/201254 (3321) : parsing [/usr/local/pf/var/conf/haproxy-admin.conf:152] : a 'http-request' rule placed after a 'reqadd' rule will still be processed before.
[ALERT] 352/201254 (3321) : Error(s) found in configuration file : /usr/local/pf/var/conf/haproxy-admin.conf
[ALERT] 352/201254 (3321) : Fatal errors found in configuration.
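
For reference, the same warnings and alerts can be reproduced without starting the daemon at all by running haproxy in configuration-check mode (the standard HAProxy -c flag, using the same paths as above):

/usr/sbin/haproxy -c -f /usr/local/pf/var/conf/haproxy-admin.conf
# exits non-zero and prints the ALERT lines when the generated
# configuration contains the duplicate backend definition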
knumsi commented 3 years ago

This is also the issue mentioned in https://github.com/inverse-inc/packetfence/issues/5918, but since I cannot reopen that issue, I am starting a new one.

cat /usr/local/pf/var/conf/haproxy-admin.conf (anonymized: XX0 = cluster IP, XX1/XX2/XX3 = nodes 1, 2, 3)

# This file is generated from a template at /usr/local/pf/conf/haproxy-admin.conf
# Any changes made to this file will be lost on restart

# Copyright (C) Inverse inc.
global
  external-check
  user haproxy
        group haproxy
        daemon
        pidfile /usr/local/pf/var/run/haproxy-admin.pid
        log /dev/log local0
        stats socket /usr/local/pf/var/run/haproxy-admin.stats level admin process 1
        maxconn 4000
        #Followup of https://github.com/inverse-inc/packetfence/pull/893
        #haproxy 1.6.11 | intermediate profile | OpenSSL 1.0.1e | SRC: https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy-1.6.11&openssl=1.0.1e&hsts=yes&profile=intermediate
        #Oldest compatible clients: Firefox 1, Chrome 1, IE 7, Opera 5, Safari 1, Windows XP IE8, Android 2.3, Java 7
        tune.ssl.default-dh-param 2048
        ssl-default-bind-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
        ssl-default-bind-options no-sslv3 no-tls-tickets
        ssl-default-server-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
        ssl-default-server-options no-sslv3 no-tls-tickets
        #OLD SSL CONFIGURATION. IF RC4 is required or if you must support clients older then the precendent list, comment all the block between this comment and the precedent and uncomment the following line
        #ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
        lua-load /usr/local/pf/var/conf/passthrough_admin.lua

listen stats
  bind  XXX.XXX.XXX.XX1:1027
  mode http
  timeout connect 10s
  timeout client 1m
  timeout server 1m
  stats enable
  stats uri /stats
  stats realm HAProxy\ Statistics
  stats auth admin:packetfence

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client 50000
        timeout server 50000

backend static
    option httpclose
    option http_proxy
    option forwardfor
    http-request set-uri http://127.0.0.1:8891%[path]?%[query]

backend api
        balance source
        option httpclose
        option forwardfor
        errorfile 502 /usr/local/pf/html/pfappserver/root/static/502.json.http
        errorfile 503 /usr/local/pf/html/pfappserver/root/static/503.json.http
        server XXX.XXX.XXX.XX1 XXX.XXX.XXX.XX1:9999 weight 1 maxconn 100 check  ssl verify none

frontend admin-https-XXX.XXX.XXX.XX0
        bind XXX.XXX.XXX.XX0:1443 ssl no-sslv3 crt /usr/local/pf/conf/ssl/server.pem
        errorfile 502 /usr/local/pf/html/pfappserver/root/static/502.json.http
        errorfile 503 /usr/local/pf/html/pfappserver/root/static/503.json.http
        capture request header Host len 40
        reqadd X-Forwarded-Proto:\ https
        http-request lua.change_host
        acl host_exist var(req.host) -m found
        http-request set-header Host %[var(req.host)] if host_exist
        http-response set-header X-Frame-Options SAMEORIGIN
        http-request lua.admin
        use_backend %[var(req.action)]
        http-request redirect location /admin/alt if { lua.redirect 1 }
        default_backend  XXX.XXX.XXX.XX0-admin

backend XXX.XXX.XXX.XX0-admin
        balance source
        option httpclose
        option forwardfor
        server XXX.XXX.XXX.XX1 XXX.XXX.XXX.XX1:1443 check

frontend admin-https-XXX.XXX.XXX.XX1
        bind XXX.XXX.XXX.XX1:1443 ssl no-sslv3 crt /usr/local/pf/conf/ssl/server.pem
        errorfile 502 /usr/local/pf/html/pfappserver/root/static/502.json.http
        errorfile 503 /usr/local/pf/html/pfappserver/root/static/503.json.http
        capture request header Host len 40
        reqadd X-Forwarded-Proto:\ https
        http-request lua.change_host
        acl host_exist var(req.host) -m found
        http-request set-header Host %[var(req.host)] if host_exist
        acl url_api  path_beg /api
        use_backend XXX.XXX.XXX.XX1-api if url_api
        http-request lua.admin
        use_backend %[var(req.action)]
        http-request redirect location /admin/alt if { lua.redirect 1 }
        default_backend  XXX.XXX.XXX.XX0-admin

backend XXX.XXX.XXX.XX0-admin
        balance source
        option httpclose
        option forwardfor
        server XXX.XXX.XXX.XX1 XXX.XXX.XXX.XX1:1443 check

backend 127.0.0.1-netdata
        option httpclose
        option http_proxy
        option forwardfor
        errorfile 502 /usr/local/pf/html/pfappserver/root/static/502.json.http
        errorfile 503 /usr/local/pf/html/pfappserver/root/static/503.json.http
        acl paramsquery query -m found
        http-request lua.admin
        http-request set-uri http://127.0.0.1:19999%[var(req.path)]?%[query] if paramsquery
        http-request set-uri http://127.0.0.1:19999%[var(req.path)] unless paramsquery

backend XXX.XXX.XXX.XX1-netdata
        option httpclose
        option http_proxy
        option forwardfor
        acl paramsquery query -m found
        http-request lua.admin
        http-request set-uri http://XXX.XXX.XXX.XX1:19999%[var(req.path)]?%[query] if paramsquery
        http-request set-uri http://XXX.XXX.XXX.XX1:19999%[var(req.path)] unless paramsquery

backend XXX.XXX.XXX.XX1-api
        balance source
        option httpclose
        option forwardfor
        http-response set-header X-Frame-Options SAMEORIGIN
        errorfile 502 /usr/local/pf/html/pfappserver/root/static/502.json.http
        errorfile 503 /usr/local/pf/html/pfappserver/root/static/503.json.http
        server XXX.XXX.XXX.XX1 XXX.XXX.XXX.XX1:9999 weight 1 maxconn 100 ssl verify none

backend XXX.XXX.XXX.XX0-portal
        option httpclose
        option http_proxy
        option forwardfor
        acl paramsquery query -m found
        http-request set-header Host XXX.XXX.XXX.XX1
        http-request lua.admin
        reqadd X-Forwarded-For-Packetfence:\ 127.0.0.1
        http-request set-uri http://127.0.0.1:8890%[var(req.path)]?%[query] if paramsquery
        http-request set-uri http://127.0.0.1:8890%[var(req.path)] unless paramsquery
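
Note that the generated file above defines two backend sections with the identical name XXX.XXX.XXX.XX0-admin, one after each frontend, which is exactly what the ALERT at haproxy-admin.conf:102 points at. A minimal sketch of what a non-conflicting variant could look like, assuming (as the workaround further down does) that the per-node frontend should reference a backend named after the node's own management IP instead of the cluster IP:

frontend admin-https-XXX.XXX.XXX.XX1
        # ... bind/acl/lua lines unchanged ...
        default_backend  XXX.XXX.XXX.XX1-admin

backend XXX.XXX.XXX.XX1-admin
        balance source
        option httpclose
        option forwardfor
        server XXX.XXX.XXX.XX1 XXX.XXX.XXX.XX1:1443 check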

cat /usr/local/pf/conf/pf.conf

# Copyright (C) Inverse inc.
[general]
#
# general.hostname
#
# Hostname of PacketFence system.  This is concatenated with the domain in Apache rewriting rules and therefore must be resolvable by clients.
hostname=pf
#
# general.dhcpservers
#
# Comma-delimited list of DHCP servers.  Passthroughs are created to allow DHCP transactions from even "trapped" nodes.
dhcpservers=XX.XX.XX.XX

[network]
#
# network.dhcpoption82logger
#
# If enabled PacketFence will monitor DHCP option82 location-based information.
# This feature is only available if the dhcpdetector is activated.
dhcpoption82logger=enabled

[guests_admin_registration]
#
# guests_admin_registration.access_duration_choices
#
# These are all the choices offered in the guest management interface as
# possible access duration values for a given registration.
access_duration_choices=1D,2D,3D,5D,1W,3M
#
# guests_admin_registration.default_access_duration
#
# This is the default access duration value selected in the dropdown on the
# guest management interface.
default_access_duration=1D

[alerting]
#
# alerting.emailaddr
#
# Comma-delimited list of email addresses to which notifications of rogue DHCP servers, security_events with an action of "email", or any other
# PacketFence-related message goes to.
emailaddr=XXXXXXXXXXXXXXXXXXXXXXXXXXX
#
# alerting.fromaddr
#
# Source email address for email notifications. Empty means root@<server-domain-name>.
fromaddr=XXXXXXXXXXXXXXXXXXXXXXXXXXX
#
# alerting.smtpserver
#
# Server through which to send messages to the above emailaddr.  The default is localhost - be sure you're running an SMTP
# host locally if you don't change it!
smtpserver=XXXXXXXXXXXXXXXXXXXXXXXXXXX

[database]
#
# database.host
#
# Server the MySQL server is running on.
host=127.0.0.1
#
# database.pass
#
# Password for the mysql database used by PacketFence. Changing this parameter after the initial configuration will *not* change it in the database it self, only in the configuration.
pass=XXXXXXXXXXXXXXXXXXXXXXXXXXX

[services]
#
# services.radiusd_acct
#
# Should freeradius handling accounting
radiusd_acct=enabled
#
# services.httpd_admin
#
# Should httpd.admin be started?
httpd_admin=enabled
#
# services.httpd_collector
#
# Should httpd.collector be started?
httpd_collector=enabled
#
# services.snmptrapd
#
# Should snmptrapd be managed by PacketFence?
snmptrapd=enabled
# services.redis_ntlm_cache
#
# Should redis be managed by PacketFence?
redis_ntlm_cache=enabled

[advanced]
#
# advanced.language
#
# Language choice for the communication with administrators
language=de_DE
# advanced.configurator
#
# Enable the Configurator and the Configurator API
configurator=disabled

[webservices]
#
# webservices.user
#
# username to use to connect to the webAPI
user=XXXXXXXXXXXXXXXXXXXXXXXXXXX
#
# webservices.pass
#
# password of the username
pass=XXXXXXXXXXXXXXXXXXXXXXXXXXX

[active_active]
# Change these 2 values by the credentials you've set when configuring MariaDB $
galera_replication_username=XXXXXXXXXXXXXXXXXXXXXXXXXXX
#
# active_active.galera_replication_password
#
# Defines the replication password to be used for the MariaDB Galera cluster replication
galera_replication_password=XXXXXXXXXXXXXXXXXXXXXXXXXXX

[interface ens192]
type=management,high-availability
mask=255.255.0.0
ip=XXX.XXX.XXX.XX1

And more detail on the cluster configuration, cat /usr/local/pf/conf/cluster.conf:

# Copyright (C) Inverse inc.
# Cluster configuration file for active/active
# This file will have it deactivated by default
# To activate the active/active mode, set a management IP in the cluster section
# Before doing any changes to this file, read the documentation
[CLUSTER]
management_ip=XXX.XXX.XXX.XX0

[CLUSTER interface ens192]
ip=XXX.XXX.XXX.XX0

[pf1]
management_ip=XXX.XXX.XXX.XX1

[pf1 interface ens192]
ip=XXX.XXX.XXX.XX1

[pf2]
management_ip=XXX.XXX.XXX.XX2

[pf2 interface ens192]
ip=XXX.XXX.XXX.XX2

[pf3]
management_ip=XXX.XXX.XXX.XX3

[pf3 interface ens192]
ip=XXX.XXX.XXX.XX3
knumsi commented 3 years ago

Our workaround is as follows (applied on each node, one after the other): change some lines in the generator file https://github.com/inverse-inc/packetfence/blob/v10.2.0/lib/pf/services/manager/haproxy_admin.pm

nano /usr/local/pf/lib/pf/services/manager/haproxy_admin.pm, then change lines 203 and 205 from $mgmt_cluster_ip-admin to $mgmt_ip-admin.
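
The same edit can be scripted; a sketch, assuming the line numbers 203 and 205 quoted above (they are specific to the v10.2.0 file linked here and will differ in other releases), and keeping a backup copy of the module:

cd /usr/local/pf/lib/pf/services/manager
cp haproxy_admin.pm haproxy_admin.pm.bak
# replace the cluster-IP variable with the node management-IP variable
# on exactly those two lines, then verify the result before restarting
sed -i '203s/mgmt_cluster_ip/mgmt_ip/;205s/mgmt_cluster_ip/mgmt_ip/' haproxy_admin.pm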

Then restart the services:

/usr/local/pf/bin/pfcmd fixpermissions
/usr/local/pf/bin/pfcmd configreload hard
/usr/local/pf/bin/pfcmd service haproxy-admin restart
/usr/local/pf/bin/pfcmd service pf restart

At least the cluster IP/DNS is back online for the management interface.

Bad side effect: the nodes are only intermittently reachable via their direct IP/DNS. It is very buggy, as if haproxy were doing some random connection killing.

knumsi commented 3 years ago

So the question is: What is the correct config in haproxy for a clustered environment?

julsemaan commented 3 years ago

What is the error haproxy is reporting when trying to start without your workaround?

I'd bet on a port bind error due to the fact that this setting is missing from /etc/sysctl.conf: net.ipv4.ip_nonlocal_bind = 1
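
For anyone hitting this, the current and persistent values of that setting can be checked with plain sysctl tooling (nothing PacketFence-specific):

sysctl net.ipv4.ip_nonlocal_bind
# shows the value currently active in the kernel
grep -r ip_nonlocal_bind /etc/sysctl.conf /etc/sysctl.d/
# shows where, if anywhere, it is set persistently
sysctl --system
# reloads all sysctl configuration files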

knumsi commented 3 years ago

These are the errors (as written in my first post). The key error, extracted for you: backend 'XXX.XXX.XXX.XXX-admin' has the same name as backend 'XXX.XXX.XXX.XXX-admin' declared at /usr/local/pf/var/conf/haproxy-admin.conf:78. That is a duplicate backend definition, and haproxy-admin.conf is generated by PacketFence.

There is no error when I apply the workaround, so the generated haproxy config is wrong. It is not a bind error, and the sysctl setting is not missing:

cat /etc/sysctl.d/01-PacketFence.conf
#### PacketFence settings
net.ipv4.ip_nonlocal_bind = 1
net.ipv6.conf.all.disable_ipv6 = 1

knumsi commented 3 years ago

Another issue with the same problem was closed: https://github.com/inverse-inc/packetfence/issues/5918

FDIT-LKMRBI commented 3 years ago

Same issue here! I applied the workaround mentioned by @knumsi and ran /usr/local/pf/bin/pfcmd service haproxy-admin generateconfig afterwards. Then I restarted all services, but the admin GUI gives me an HTTP 502 "Proxy Error". I ask the same question as knumsi: why does PacketFence generate haproxy-admin.conf incorrectly?

knumsi commented 2 years ago

Next week this will have been open for one year since the initial report. It is a showstopper for a clustered environment.

julsemaan commented 2 years ago

So you're telling me you're seeing this exact issue on v11.1.0 at the moment?

I doubt this issue affects many users, since we have onboarded many dozens (likely 100+) of deployments during the year with no problem.

knumsi commented 2 years ago

Well, the issue is: will the installation break on an update? As there is no fix for this, it is just wild guessing whether it will work or not. As long as there is no fix, I have to assume it has not been fixed.

Also, this is not even tagged as a bug, so why would anybody fix something that is not even acknowledged as a problem? Or did a commit address this issue without being linked to it?

I doubt the mentioned 100+ deployments match the setup described here: cluster + Debian. But please tell me if I am wrong about this.

Three people have confirmed having (or having had) this issue. Perhaps somebody running a Debian cluster has already updated to 11.1.0 and can confirm that this is no longer an issue after the update?

nqb commented 2 years ago

Not relevant anymore.