[Open] mstilkerich opened this issue 2 years ago
Same problem here — how should we proceed? @mstilkerich, have you found any workaround?
I fixed it back then by changing the service file. In the meantime, I have moved on to nftables.
I wanted to add that on Ubuntu 22.04 this dependency causes "ordering cycle" issues and CrowdSec fails to start. See the discussion here:
https://discourse.crowdsec.net/t/firewall-bouncer-fails-to-start-systemd-ordering-cycle/1265
I couldn't reproduce the issue. Can you specify
Hi @sabban ,
I just wanted to confirm which issue you are not able to reproduce. I take it you mean the original issue posted by @mstilkerich, correct? Unfortunately, I cannot really help with that, as I am not using netfilter/ipset. I did experience the "ordering cycle" issue on Ubuntu 22.04 that is mentioned here:
https://discourse.crowdsec.net/t/firewall-bouncer-fails-to-start-systemd-ordering-cycle/1265
It seems to be related to the same dependency and that's why I put a note here.
@mstilkerich @LCerebo @pwsnla
It's most likely an issue on our side. Can you share the configuration you have:
Same problem with Dnsmasq: https://github.com/crowdsecurity/cs-firewall-bouncer/issues/326 — the cyclic dependencies cause the Dnsmasq and crowdsec services to fail to start (due to Before= dependencies on netfilter-persistent).
To reproduce, just install Dnsmasq and the crowdsec firewall bouncer. On the first reboot, the services fail to start.
I have this issue also.
I commented out the
Before=netfilter-persistent.service
line as a workaround.
If I don't comment it out, rebooting takes about twice as long as usual, and I get these dmesg messages:
[Mon Oct 16 00:57:03 2023] systemd[1]: pihole-FTL.service: Found ordering cycle on network-online.target/start
[Mon Oct 16 00:57:03 2023] systemd[1]: pihole-FTL.service: Found dependency on network.target/start
[Mon Oct 16 00:57:03 2023] systemd[1]: pihole-FTL.service: Found dependency on network-pre.target/start
[Mon Oct 16 00:57:03 2023] systemd[1]: pihole-FTL.service: Found dependency on netfilter-persistent.service/start
[Mon Oct 16 00:57:03 2023] systemd[1]: pihole-FTL.service: Found dependency on crowdsec-firewall-bouncer.service/start
[Mon Oct 16 00:57:03 2023] systemd[1]: pihole-FTL.service: Found dependency on nss-lookup.target/start
[Mon Oct 16 00:57:03 2023] systemd[1]: pihole-FTL.service: Found dependency on pihole-FTL.service/start
[Mon Oct 16 00:57:03 2023] systemd[1]: pihole-FTL.service: Job network-online.target/start deleted to break ordering cycle starting with pihole-FTL.service/start
[Mon Oct 16 00:57:03 2023] systemd[1]: crowdsec-firewall-bouncer.service: Found ordering cycle on network.target/start
[Mon Oct 16 00:57:03 2023] systemd[1]: crowdsec-firewall-bouncer.service: Found dependency on network-pre.target/start
[Mon Oct 16 00:57:03 2023] systemd[1]: crowdsec-firewall-bouncer.service: Found dependency on netfilter-persistent.service/start
[Mon Oct 16 00:57:03 2023] systemd[1]: crowdsec-firewall-bouncer.service: Found dependency on crowdsec-firewall-bouncer.service/start
[Mon Oct 16 00:57:03 2023] systemd[1]: crowdsec-firewall-bouncer.service: Job network.target/start deleted to break ordering cycle starting with crowdsec-firewall-bouncer.service/start
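For anyone applying the same workaround: instead of editing the unit file shipped by the package (which a package upgrade can overwrite), the Before= ordering can be cleared with a systemd drop-in — a sketch, assuming the stock unit name crowdsec-firewall-bouncer.service:

```ini
# /etc/systemd/system/crowdsec-firewall-bouncer.service.d/no-netfilter-order.conf
[Unit]
# An empty assignment resets the list, dropping the packaged
# Before=netfilter-persistent.service and breaking the ordering cycle.
Before=
```

After creating the drop-in, run systemctl daemon-reload and restart the bouncer.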
Crowdsec version:
2023/10/16 01:03:49 version: v1.5.4-debian-pragmatic-arm64-e4dcdd25728b914823525f1efabf18d5c454902b
2023/10/16 01:03:49 Codename: alphaga
2023/10/16 01:03:49 BuildDate: 2023-09-20_12:15:26
2023/10/16 01:03:49 GoVersion: 1.20.5
2023/10/16 01:03:49 Platform: linux
2023/10/16 01:03:49 libre2: C++
2023/10/16 01:03:49 Constraint_parser: >= 1.0, <= 2.0
2023/10/16 01:03:49 Constraint_scenario: >= 1.0, < 3.0
2023/10/16 01:03:49 Constraint_api: v1
2023/10/16 01:03:49 Constraint_acquis: >= 1.0, < 2.0
crowdsec-firewall-bouncer-iptables version:
version: v0.0.28-debian-pragmatic-af6e7e25822c2b1a02168b99ebbf8458bc6728e5
BuildDate: 2023-10-02_11:37:45
GoVersion: 1.20.1
bouncer config:
mode: iptables
update_frequency: 10s
log_mode: file
log_dir: /var/log/
log_level: info
log_compression: true
log_max_size: 100
log_max_backups: 3
log_max_age: 30
api_url: http://127.0.0.1:8888/
api_key: xxxxxxxxxxxxxxxxxxxxxxxxx
insecure_skip_verify: false
disable_ipv6: false
deny_action: DROP
deny_log: false
supported_decisions_types:
  - ban
#to change log prefix
#deny_log_prefix: "crowdsec: "
#to change the blacklists name
blacklists_ipv4: crowdsec-blacklists
blacklists_ipv6: crowdsec6-blacklists
#type of ipset to use
ipset_type: nethash
#if present, insert rule in those chains
iptables_chains:
  - INPUT
#  - FORWARD
#  - DOCKER-USER
## nftables
nftables:
  ipv4:
    enabled: true
    set-only: false
    table: crowdsec
    chain: crowdsec-chain
    priority: -10
  ipv6:
    enabled: true
    set-only: false
    table: crowdsec6
    chain: crowdsec6-chain
    priority: -10
nftables_hooks:
  - input
  - forward
# packet filter
pf:
  # an empty string disables the anchor
  anchor_name: ""
prometheus:
  enabled: false
  listen_addr: 127.0.0.1
  listen_port: 60601
crowdsec config:
common:
  daemonize: true
  log_media: file
  log_level: info
  log_dir: /var/log/
  log_max_size: 20
  compress_logs: true
  log_max_files: 10
  working_dir: .
config_paths:
  config_dir: /etc/crowdsec/
  data_dir: /var/lib/crowdsec/data/
  simulation_path: /etc/crowdsec/simulation.yaml
  hub_dir: /etc/crowdsec/hub/
  index_path: /etc/crowdsec/hub/.index.json
  notification_dir: /etc/crowdsec/notifications/
  plugin_dir: /usr/lib/crowdsec/plugins/
crowdsec_service:
  #console_context_path: /etc/crowdsec/console/context.yaml
  acquisition_path: /etc/crowdsec/acquis.yaml
  acquisition_dir: /etc/crowdsec/acquis.d
  parser_routines: 1
cscli:
  output: human
  color: auto
db_config:
  log_level: info
  type: sqlite
  db_path: /var/lib/crowdsec/data/crowdsec.db
  use_wal: true
  #max_open_conns: 100
  #user:
  #password:
  #db_name:
  #host:
  #port:
  flush:
    max_items: 5000
    max_age: 7d
plugin_config:
  user: nobody # plugin process would be ran on behalf of this user
  group: nogroup # plugin process would be ran on behalf of this group
api:
  client:
    insecure_skip_verify: false
    credentials_path: /etc/crowdsec/local_api_credentials.yaml
  server:
    log_level: info
    listen_uri: 127.0.0.1:8888
    profiles_path: /etc/crowdsec/profiles.yaml
    console_path: /etc/crowdsec/console.yaml
    online_client: # Central API credentials (to push signals and receive bad IPs)
      credentials_path: /etc/crowdsec/online_api_credentials.yaml
    trusted_ips: # IP ranges, or IPs which can have admin API access
      - 127.0.0.1
      - ::1
#    tls:
#      cert_file: /etc/crowdsec/ssl/cert.pem
#      key_file: /etc/crowdsec/ssl/key.pem
prometheus:
  enabled: false
  level: full
  listen_addr: 127.0.0.1
  listen_port: 6060
If you need me to provide something more, just ask.
Thank you, I will look at it as soon as I get some time.
Ok, I managed to reproduce the issue. I prepared a new prerelease (v0.0.29-rc1) so it can be tested.
I'd also like to add that if the bouncer is running natively on the host while the crowdsec engine is in Docker, the bouncer service will fail, since it tries to start before Docker and no API is available at that point. Here are the adjustments I've made to the service file to mitigate that scenario:
[Unit]
Description=The firewall bouncer for CrowdSec
After=network.target remote-fs.target nss-lookup.target crowdsec.service docker.service
Before=netfilter-persistent.service
ConditionPathExists=!/var/lib/crowdsec/pending-registration

[Service]
Type=notify
Restart=always
RestartSec=5
ExecStart=/usr/bin/crowdsec-firewall-bouncer -c /etc/crowdsec/bouncers/crowdsec-firewall-bouncer.yaml
ExecStartPre=/usr/bin/crowdsec-firewall-bouncer -c /etc/crowdsec/bouncers/crowdsec-firewall-bouncer.yaml -t
ExecStartPost=/bin/sleep 0.1

[Install]
WantedBy=multi-user.target
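If the only change needed is the Docker ordering, it can also live in a systemd drop-in rather than the packaged unit file, so upgrades don't revert it — a sketch, assuming the engine's LAPI runs in a container managed by docker.service:

```ini
# /etc/systemd/system/crowdsec-firewall-bouncer.service.d/wait-for-docker.conf
[Unit]
# Order the bouncer after Docker so the LAPI container can come up first.
After=docker.service
# Optionally also pull Docker in as a hard dependency:
#Requires=docker.service
```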
I would like to ask if any updates are planned to fix this. I have been experiencing the same issue as well, and as a result my cifs mounts break. The only workaround that worked was commenting out Before=netfilter-persistent.service.
Hello,
I use the bouncer in ipset mode since I'd like to control where inside my firewall rules the crowdsec lists are checked.
I have netfilter-persistent installed including iptables-persistent and ipset-persistent to load my own firewall rules including the rule referring to the crowdsec ipsets, as well as the initial creation of the empty ipsets for crowdsec.
Now, after a reboot, I see errors in the cs-firewall-bouncer log saying that the ipsets do not exist, and the sets themselves show up empty. This is because the bouncer service is set to start before the netfilter-persistent service, which is the one that creates the ipsets. To fix this, the cs-firewall-bouncer service should be set to start after netfilter-persistent, not before.
I saw that the Before= dependency was added in #168, with the reasoning that netfilter-persistent failed because the ipsets created by crowdsec were missing. However, I believe this was because the ipset-persistent plugin was not installed. If it is installed, the ipsets are persisted along with the netfilter rules upon netfilter-persistent save.
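For this setup, the fix is to reverse the ordering so the bouncer starts only after netfilter-persistent has restored the ipsets. A minimal sketch as a systemd drop-in, assuming the packaged unit name crowdsec-firewall-bouncer.service (the empty Before= clears the ordering shipped by the package):

```ini
# /etc/systemd/system/crowdsec-firewall-bouncer.service.d/after-netfilter.conf
[Unit]
# Reset the packaged Before= ordering, then start after netfilter-persistent
# so the ipsets it restores already exist when the bouncer comes up.
Before=
After=netfilter-persistent.service
```

Run systemctl daemon-reload after creating the drop-in for it to take effect.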