luison opened this issue 4 months ago
Interesting, once I get home tonight I'll take a look.
One of the files should contain some configs. Can you paste what you're using so I have the same setup when I test?
I actually need to change those because I think right now they default to manual mode out of the box.
If I remember correctly, it may be looking for a network adapter whose name starts with `br-`:

```shell
DOCKER_NET_INT=`docker network inspect -f '{{"'br-$bridge'" | or (index .Options "com.docker.network.bridge.name")}}' $bridge`
```
If you run `ifconfig` (or `ipconfig` on Windows), do you have any network adapters starting with `br-`?
I have the following:
```
br-a58c28cc477d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
        inet6 af25::15:16ee:dfc2:2bf2  prefixlen 64  scopeid 0x20<link>
        ether 05:24:16:f2:2b:c0  txqueuelen 0  (Ethernet)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
And finally, what distro are you running?
Thanks. I do, but:

```shell
$ docker network inspect -f '{{"'br-$bridge'" | or (index .Options "com.docker.network.bridge.name")}}' $bridge
"docker network inspect" requires at least 1 argument.
See 'docker network inspect --help'.

Usage:  docker network inspect [OPTIONS] NETWORK [NETWORK...]
```
Can you check my previous reply? I asked for several pieces of info, including what configs you are using in that section and what OS you are running, so that I can test.
Our OS is Debian 12 (Linux), and our config is:

```shell
DOCKER_INT="docker0"
DOCKER_NETWORK="172.17.0.0/16"
NETWORK_MANUAL_MODE=false
NETWORK_ADAPT_NAME="alsur-traefik-proxy" ### irrelevant I understand as no manual network
DEBUG_ENABLED=true
CSF_FILE_ALLOW='/etc/csf/csf.allow'
CSF_COMMENT='Docker container whitelist'
```
So far, we've figured out that the error comes from this container loop:

```shell
else
    bridge=$(docker inspect -f "{{with index .NetworkSettings.Networks \"${netmode}\"}}{{.NetworkID}}{{end}}" ${container} | cut -c -12)
    DOCKER_NET_INT=`docker network inspect -f '{{"'br-$bridge'" | or (index .Options "com.docker.network.bridge.name")}}' $bridge`
    ipaddr=`docker inspect -f "{{with index .NetworkSettings.Networks \"${netmode}\"}}{{.IPAddress}}{{end}}" ${container}`
fi
```
The line

```shell
bridge=$(docker inspect -f "{{with index .NetworkSettings.Networks \"${netmode}\"}}{{.NetworkID}}{{end}}" ${container} | cut -c -12)
```

returns blank in some cases, due to the previous command:

```shell
netmode=$(docker inspect -f "{{.HostConfig.NetworkMode}}" ${container})
```

which in our case is sometimes the network name (works fine) and sometimes a hash. When it is a hash, the lookup `NetworkSettings.Networks "${netmode}"` finds nothing, because that key is always the network name. Uncertain why.
Sample:

```json
"HostConfig": {
    ....
    "NetworkMode": "portainer_default",
```

```json
"HostConfig": {
    ...
    "NetworkMode": "423a35100f0eb2e6ae4c944a2b741907b68c9de237bf72e2863e1fb32484ec78",
```
And therefore the variables there do not get correctly set for some containers.
I hope that makes sense.
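For what it's worth, a minimal sketch of a possible workaround (helper names are ours, not from the script): normalize `NetworkMode` to a network name before using it as a key, since the keys under `.NetworkSettings.Networks` are always names.

```shell
# Hypothetical helpers, not from the repo: HostConfig.NetworkMode may hold
# either a network name ("portainer_default") or a 64-char hex network ID.
# Keys under .NetworkSettings.Networks are always names, so resolve IDs first.

is_network_id() {
    # true when the argument is a 64-character lowercase hex string
    case "$1" in
        *[!0-9a-f]*) return 1 ;;
        *) [ "${#1}" -eq 64 ] ;;
    esac
}

resolve_netmode() {
    netmode="$1"
    if is_network_id "${netmode}"; then
        # resolve the ID to its canonical name (requires the docker CLI)
        docker network inspect -f '{{.Name}}' "${netmode}"
    else
        printf '%s\n' "${netmode}"
    fi
}
```

The existing `bridge=` lookup could then key on `$(resolve_netmode "${netmode}")` instead of the raw value.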
Yeah this makes sense.
At present, I'm reworking the script. It was a quick slap-together, and there are easier ways to do this, such as merging all of the steps into a single file to run, plus error messages when certain things aren't configured properly.
With this, I'll look through the issue above and see if I can clean it up and make it work better.
Alright, I see what's going on. Manual mode works fine. The error is thrown when manual mode is turned off.
Need to re-organize some stuff.
Alright, the scripts have been dramatically changed.
I need to do some more testing tomorrow, but currently, it works for me in and out of manual mode.
Once the scripts are installed, you can also track what is added as the scripts do their thing if you run `sudo csf -r`. This should re-add the rules again, but print out the steps.
Keep in mind, however, that when I redid the script today I added a bunch of automation to make things easier. So when the install.sh script is run, it first checks whether CSF is installed on your system. If the csf command returns a valid response, the install step is skipped; so if you have it installed right now, it shouldn't do anything. If CSF is not installed, it will download the latest version and install the required packages used with CSF.
I updated the README, so it should explain everything.
Also added OpenVPN server support as another script in the /patch folder.
Thanks. I will give them a try, but at the moment we are actually looking into alternative firewalls, unless we can solve two issues: the open ports with Docker networking, and the source IP at container level, which for some reason we also lose when implementing CSF and manual rules. The first we think we can solve, since we've been routing NAT traffic through CSF with a set of rules passing through the CSF LOCALINPUT chain, and we hope we can emulate that. The second one we are completely unclear about.
It would be great if you can clarify how indispensable it is to install your "csf" copy. I mean, as far as I understood, your modifications come from the csfpost/pre processes, so they should work on a standard CSF installation. Is that so?
I am still extremely surprised that the firewall + Docker networking is so complicated and with such a lack of alternatives.
On the other hand, also concerned about the current development situation of CSF itself.
> It would be great if you can clarify how indispensable it is to install your "csf" copy. I mean, as far as I understood, your modifications come from the csfpost/pre processes, so they should work on a standard CSF installation. Is that so?
If you're asking if you need to install the copy of CSF I have provided in this repo, no. You can download CSF straight from the website.
I provide a copy of CSF because it just makes my life easier. When I create a new build of the scripts, my workflow automatically grabs the latest version of CSF from the developer server, finds the version number in the version.txt, appends the version number to the .tgz archive and then adds it as an artifact along with the patches zip. My copy of the csf .tgz is exactly what the developer also publishes, with no edits.
That's why I also outlined that several times in the readme:
This repository contains several folders:
📁 csf_install: The latest version of ConfigServer Firewall
Can also be downloaded from the [Releases Page](https://github.com/Aetherinox/csf-firewall/releases). Each release of these patches will include the most current version of ConfigServer Firewall as a .tgz file.
>> You do not need this if you already have CSF installed on your system.
What matters is installing the patches zip, and having the pre/post script and the docker.sh script in the post.d folder. I can probably kill the csf_install folder now, since it's added to releases, and clean up the repo a bit.
> I am still extremely surprised that the firewall + Docker networking is so complicated and with such a lack of alternatives.
Sadly, this is also the case with UFW (which is basically just a simpler front-end to iptables). There have been numerous scripts published which do the same things this repo does, except for UFW instead of straight iptables.
> On the other hand, also concerned about the current development situation of CSF itself.
From what I understand, CSF is still actively maintained, and is also a big part of the cPanel + WHM combination. If you see CSF being removed from WHM as an automatic install; then that's the moment you should be concerned that development is dead. Right now CSF is pretty stable, and contains a lot of the functionality that a firewall should have with a GUI.
Sure, there are some extra things that would make life easier, but that developer also works on several other projects, and he has a paid service. For as long as I've run CSF, I haven't found anything that is massively needed on my end. I've had docker, traefik, and CSF running now for about a year, and it has been flawless.
> which for some reason we loose too when implementing CSF and manual rules.
When you say lose, do you mean that you add the firewall rules, but then they disappear from iptables? Have you tried installing/using the iptables-persistent package?
With the script I provide, every single time you start CSF; my patch re-adds the iptable rules to your table. However, if you manually add new iptable rules; they will be lost when you restart the system, shut down iptables, or clear the table.
However, if you use the iptables-persistent package, they will be saved and re-applied should something happen.
After you install the package, run:

```shell
sudo iptables-save > /etc/iptables/rules.v4
sudo ip6tables-save > /etc/iptables/rules.v6
```

Your iptables will be saved to:

```
/etc/iptables/rules.v4
/etc/iptables/rules.v6
```
Alternatively, you can use the commands:

```shell
sudo /etc/init.d/iptables-persistent save
sudo /etc/init.d/iptables-persistent reload
```
What you could do is bring docker down completely, and CSF, then wipe your iptables entirely. The latest version of my install.sh includes a `--flush` argument to do this:

```shell
./install.sh --flush
```
Or manually, you can run the commands below:

```shell
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X
```
Then add any custom rules you need that should be persistent, and save them to file with iptables-persistent:

```shell
sudo /etc/init.d/iptables-persistent save
```
After you have the rules in place, you can bring docker and csf/lfd back up, and then let my scripts add their rules automatically. The reason for this order is that my script is dynamic and will add new rules based on any new containers you add to your docker install. If you run docker + my script first and then use the persistent command, you'll save my rules too; they would then be added again every time you restart csf, along with any new rules for new containers.
That is the major complaint about iptables: it has no way of determining whether a rule has already been added. It will allow the same rule to be added over and over, and you can end up with a table containing 10 copies of the same rule, which just makes it cluttered.
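A common mitigation (just a sketch, not something the patch currently does) is to guard every append with iptables' `-C` check flag, which returns non-zero when the exact rule is absent:

```shell
# Idempotent append: only add the rule when iptables -C says it is missing.
add_rule_once() {
    # usage: add_rule_once <chain> <rule-spec...>
    if ! iptables -C "$@" 2>/dev/null; then
        iptables -A "$@"
    fi
}

# illustrative call:
#   add_rule_once INPUT -s 172.17.0.0/16 -j ACCEPT
```

Re-running a script built on this pattern leaves the table unchanged instead of accumulating duplicates.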
I went ahead and removed the csf_install folder from the repo, since it's added to the release; this removes confusion. I also moved csf.conf to the extras folder.
I finally had time to test your new config, and unfortunately errors remain when executing the post rules. A brief list follows, though perhaps they all derive from the first one. Please note this is a manually created network:
```
+ CONTAINERS Configure containers
/traefik-whoami NAME /traefik-whoami-1
CONTAINER 585a1ee5e8d2
NETMODE b45330692bf543d377ee305a20dc3bb804ded12774525ff5b2574b2761cea4f4
NETWORK alsur-traefik-proxy
BRIDGE b45330692bf5
DOCKER_NET br-b45330692bf5
IP 172.31.0.3
/xxxxx_traefik_ NAME /xxxx_traefik_traefik
CONTAINER 1dc11c43e4ed
NETMODE b45330692bf543d377ee305a20dc3bb804ded12774525ff5b2574b2761cea4f4
NETWORK xxxxx-traefik-proxy
socket_proxy template parsing error: template: :1: unterminated quoted string
"docker network inspect" requires at least 1 argument.
See 'docker network inspect --help'.
Usage: docker network inspect [OPTIONS] NETWORK [NETWORK...]
Display detailed information on one or more networks
template parsing error: template: :1: unterminated quoted string
BRIDGE
DOCKER_NET
IP
SOURCE 10.0.0.141:80
DESTINATION 80/tcp
iptables v1.8.9 (nf_tables): host/network `' not found
Try `iptables -h' or 'iptables --help' for more information.
iptables v1.8.9 (nf_tables): host/network `' not found
Try `iptables -h' or 'iptables --help' for more information.
+ RULE: -A DOCKER -d /32 ! -i -o -p tcp -m tcp --dport 80 -j ACCEPT
+ RULE: -t nat -A POSTROUTING -s /32 -d /32 -p tcp -m tcp --dport 80 -j MASQUERADE
Bad argument `tcp'
Try `iptables -h' or 'iptables --help' for more information.
+ RULE: -t nat -A DOCKER -d 10.0.0.141/32 ! -i -p tcp -m tcp --dport 80 -j DNAT --to-destination :80
SOURCE 127.0.0.1:80
DESTINATION 80/tcp
iptables v1.8.9 (nf_tables): host/network `' not found
```
Then one network gets correctly created, but then again:
```
/traefik-traefi NAME /traefik-traefik-forward-auth-1
CONTAINER 815a7b6440c5
NETMODE b45330692bf543d377ee305a20dc3bb804ded12774525ff5b2574b2761cea4f4
NETWORK xxxx-traefik-proxy
BRIDGE b45330692bf5
DOCKER_NET br-b45330692bf5
IP 172.31.0.2
/portainer_port NAME /portainer_portainer
CONTAINER df92cd5bf16e
NETMODE portainer_default
NETWORK portainer_default
socket_proxy template parsing error: template: :1: unterminated quoted string
"docker network inspect" requires at least 1 argument.
See 'docker network inspect --help'.
Usage: docker network inspect [OPTIONS] NETWORK [NETWORK...]
Display detailed information on one or more networks
template parsing error: template: :1: unterminated quoted string
```
As explained earlier, we are trying alternative solutions/scripts, but thought you might be interested in these outcomes. Thanks.
> As explained earlier we are trying alternative solutions/scripts but thought you might be interested in this outcomes
Yeah that's fine, but at least we'll get these bugs ironed out.
Can you re-paste what you now have at the top of the docker.sh file (docker_int, etc.)? My current settings don't throw this error.
For some reason, it's not fetching the bridge, which means it's not tracking the initial network being used by the container(s).
Going to go plug some values in.
There was a mistake there, but we've retried with the same result. I just realized I missed a second error (perhaps due to the first):
```
BRIDGE
DOCKER_NET
IP
SOURCE 127.0.0.1:80
DESTINATION 80/tcp
iptables v1.8.9 (nf_tables): host/network `' not found
Try `iptables -h' or 'iptables --help' for more information.
iptables v1.8.9 (nf_tables): host/network `' not found
Try `iptables -h' or 'iptables --help' for more information.
+ RULE: -A DOCKER -d /32 ! -i -o -p tcp -m tcp --dport 80 -j ACCEPT
+ RULE: -t nat -A POSTROUTING -s /32 -d /32 -p tcp -m tcp --dport 80 -j MASQUERADE
```
The settings are:

```shell
DOCKER_INT="docker0"
NETWORK_MANUAL_MODE="false"
NETWORK_ADAPT_NAME="traefik-proxy"
CSF_FILE_ALLOW="/etc/csf/csf.allow"
CSF_COMMENT="Docker container whitelist"
DEBUG_ENABLED="true"

# #
#   list > network ips
#
#   this is the list of IP addresses you will use with docker that must be
#   whitelisted.
# #

IP_CONTAINERS=(
    '172.17.0.0/12'
)
```
Please note: we have a few scripts that parse docker networks to generate rules, which seem to do their job, in case you have an interest. To be honest, we are still struggling with our objective of preventing docker's open ports in PREROUTING from bypassing all the CSF IP whitelists and blacklists. We are trying to make sure that traffic gets routed via the FORWARD chain to the LOCALINPUT chain that CSF handles:

```shell
/sbin/iptables -I FORWARD -i ${DOCKER_NET_INT} -o ${DOCKER_NET_INT} -d ${ipaddr} -j LOCALINPUT
```
We also, for some reason, lose the source IP at Docker level.
Dumb question. On the items that are erroring, you wouldn't happen to have multiple network adapters assigned to that container would you?
I got some time, so I went through the code, and I noticed something I should have caught earlier. It was even doing it on mine.

Containers with multiple networks, such as the Mailu SMTP server, would error because I had that returning a newline-separated list of network adapters. So if a container had multiple networks, it would error out because it was reading the entire list of networks at once, instead of line by line.

When I compare your errors to mine, they error in the exact same spot. I patched that bug, but I'm also going to throw in some other debug prints to at least track these down if they occur.
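In other words, the list needs to be consumed one line at a time; a generic sketch (function and variable names assumed, not the script's):

```shell
# Sketch: handle a newline-separated list of network names one per line,
# instead of passing the entire multi-line string to docker network inspect.
# In the real script the list would come from something like:
#   docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}}{{"\n"}}{{end}}' "$container"
process_networks() {
    while IFS= read -r net; do
        [ -n "${net}" ] || continue
        # one docker network inspect / iptables pass per network goes here
        printf 'handling %s\n' "${net}"
    done
}
```

Usage would look like `printf 'frontend\nbackend\n' | process_networks`, with one pass per network even when a container has several.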
Also, NETWORK_MANUAL_MODE is gone now. The script auto-detects all the network adapters and their IPs, so there's no need to adjust them manually. The same goes for NETWORK_ADAPT_NAME.
Alright. I've tested the patches on my own setup, and I've also tested on a freshly installed secondary machine. Since manual mode is gone, there's no need to mess with it anymore. It'll auto-fetch all your containers, grab the network adapters, and start adding the IPs based on each container's assignment, even when a container has multiple network adapters assigned to it; the patch will include them all, depending on their settings.
Thanks for the new version. Tested and working great! 🔥
@Aetherinox Only a little warning I have on Debian 10 (Buster):
# Warning: iptables-legacy tables present, use iptables-legacy to see them
Is it possible to uninstall iptables-legacy to remove that warning?
I am not sure whether CSF supports nftables. Without the iptables-legacy (iptables package) the CSF would not work, right?
Good to hear. Progress is nice. At least the manual mode stuff is gone now. One less thing for users to worry about.
Usually that error appears because iptables has found that one of five legacy modules needs to be loaded.
At present, CSF is not utilizing the netfilter structure (see: https://forum.configserver.com/viewtopic.php?t=10795)
This is where I've sort of come to a crossroads. Obviously CSF still gets updates; granted, they are very small updates that don't change a lot.
I've debated on branching CSF off into my own version, and doing the needed changes to update CSF to utilize said functionality. As mentioned elsewhere, Ubuntu 20.10 decided to migrate completely over to nftables and do away with iptables, coupled with the fact that nftables brings a lot more to the table that I'd rather CSF be using.
I'm on Ubuntu 24, and it annoys me that I still need iptables-legacy installed. CSF is a great asset, but there are a few places that need an overhaul, and I don't see the original developer doing it anytime soon.
So it may be something I debate on doing, and would at least make life easier for patches I've made such as the Dark Theme. Sadly until then, we're stuck with legacy.
I see, so if I understand correctly, it is not currently possible to get rid of the warning? Would blacklisting or removing modules using rmmod fix the warning, or CSF would fail to set iptables rules altogether?
I've been reading through the CSF rules, and I noticed that in v21.19 the developer added a section which converts iptables-legacy over to iptables-nft. The check in CSF looks specifically for /usr/sbin/iptables-nft. Try the command:

```shell
sudo iptables-nft --list
```

and see if you get a response and a list of rules.
Finally, when you get the error:

```
# Warning: iptables-legacy tables present, use iptables-legacy to see them
```
Can you print me the surrounding lines around the error? It appears when I run it on my two servers, I don't have any legacy iptables.
If you run the following command, it should spit out what rules are using the legacy structure:

```shell
sudo iptables-legacy --list
```
Mine currently looks like:

```
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
```
You can also dump them to files using:

```shell
iptables-legacy-save > iptables-legacy-save.txt
ip6tables-legacy-save > ip6tables-legacy-save.txt
```
If you use the legacy list command, it should show specifically which rules are legacy. If they are custom rules you've added, you can convert them over to nftables using `iptables-translate <rule>` (random rule example here):

```shell
iptables-translate -A DOCKER -d 172.18.0.3/32 ! -i br-a58c28cc477d -o br-a58c28cc477d -p tcp -m tcp --dport 995 -j ACCEPT
```
Which will spit out the nft equivalent you can add as a rule:

```shell
nft 'add rule ip filter DOCKER iifname != "br-a58c28cc477d" oifname "br-a58c28cc477d" ip daddr 172.18.0.3 tcp dport 995 counter accept'
```
Obviously CSF's conversion approach is better than nothing, but there are still a few places where things could be improved. Yet another thing I can add to my list. I'd much rather use nftables directly instead of the iptables-nft wrapper.
Hi, thanks much for the information!
When I execute the command `iptables-nft --list`, I get:
```
# Warning: iptables-legacy tables present, use iptables-legacy to see them
Chain INPUT (policy DROP)
target     prot opt source     destination
ACCEPT     all  --  anywhere   anywhere
LOCALINPUT all  --  anywhere   anywhere
ACCEPT     all  --  anywhere   anywhere
INVALID    tcp  --  anywhere   anywhere
ACCEPT     all  --  anywhere   anywhere
DROP       icmp --  anywhere   anywhere   icmp echo-request
ACCEPT     icmp --  anywhere   anywhere
ACCEPT     all  --  anywhere   anywhere   ctstate RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere   anywhere   ctstate NEW tcp dpt:smtp
ACCEPT     tcp  --  anywhere   anywhere   ctstate NEW tcp dpt:http
ACCEPT     tcp  --  anywhere   anywhere   ctstate NEW tcp dpt:https
```
etc..
When I execute the command `iptables-legacy --list`, I get:
```
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
```
I've checked which binaries the "iptables" package contains using `dpkg -L iptables`. It seems that this package contains iptables-legacy, iptables-nft, and others.
When I executed `ls -l /usr/sbin/iptables`, it turned out to be a symlink: /usr/sbin/iptables -> /etc/alternatives/iptables. Running `ls -l /etc/alternatives/iptables` showed that this is another symlink, pointing to /usr/sbin/iptables-nft.
So I am not sure if I can get rid of that warning at all.
The warning is shown when executing your script:

```
Running /usr/local/csf/bin/csfpost.sh
+ DOCKER Flushing existing chain DOCKER
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them
# Warning: iptables-legacy tables present, use iptables-legacy to see them
```
And furthermore for each of the container rules:
```
BRIDGE 145abd8e68d3
DOCKER INTERFACE docker0
SUBNET 172.17.0.0/16
+ RULE: -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
+ RULE: -t nat -A DOCKER -i docker0 -j RETURN
# Warning: iptables-legacy tables present, use iptables-legacy to see them
+ RULE: -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
+ RULE: -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
BRIDGE e2191ed8a043
DOCKER INTERFACE br-e2191ed8a043
SUBNET 172.21.0.0/16
+ RULE: -t nat -A POSTROUTING -s 172.21.0.0/16 ! -o br-e2191ed8a043 -j MASQUERADE
+ RULE: -t nat -A DOCKER -i br-e2191ed8a043 -j RETURN
# Warning: iptables-legacy tables present, use iptables-legacy to see them
+ RULE: -A FORWARD -o br-e2191ed8a043 -m conntrack --ctstate NEW,RELATED,ESTABLISHED -j ACCEPT
+ RULE: -A FORWARD -o br-e2191ed8a043 -j DOCKER
+ RULE: -A FORWARD -i br-e2191ed8a043 ! -o br-e2191ed8a043 -j ACCEPT
+ RULE: -A FORWARD -i br-e2191ed8a043 -o br-e2191ed8a043 -j ACCEPT
+ RULE: -A DOCKER-ISOLATION-STAGE-1 -i br-e2191ed8a043 ! -o br-e2191ed8a043 -j DOCKER-ISOLATION-STAGE-2
+ RULE: -A DOCKER-ISOLATION-STAGE-2 -o br-e2191ed8a043 -j DROP
```

etc..
And as shown before, also when executing `iptables --list`.
So now I am not quite sure whether I can easily remove the "iptables" package just to get rid of that warning.
Furthermore, regarding your patch: what happens when a docker container is restarted or upgraded? Currently I use Watchtower, and once it upgraded a container, I lost access to that container. I had to run:

```shell
csf -ra
```

for the nginx proxy to be able to get a direct connection to the container once again. Is there any hook for docker which would restart csf automatically, so that upgraded/restarted containers work flawlessly?
And last but not least, regarding the csf.allow file: I see that it can get polluted very quickly depending on how many containers are used on the system. Is there any clean-up procedure, so that it does not have thousands of lines after a year? Currently, over 50 lines were added within 2 days.
So three points to discuss here:
Hope we can perfect this script, so it can be used by everyone without any issues!
PS: Now I remember that I also added a systemd docker drop-in, "run-before-csf.conf":

```
[Unit]
After=
After=network-online.target containerd.service
Before=
Before=csf.service
```
I don't know if this is needed, but it made starting my containers and WireGuard connection faster on startup..
> The warning is shown when executing your script:

Now that's interesting. Because those rules get fully converted over to nft before the firewall even starts up, and you have no entries in your legacy table, so I wonder what that is about.
You are running Debian 10 / Buster on this machine correct? I want to bring up a VM and test it out on there. I'm currently on Ubuntu 24.04 LTS.
> What happens when the docker container is restarted or upgraded? Currently I use the Watchtower and once it upgraded the container, I lost access to that container. I had to run: csf -ra
Out of the box with ConfigServer Firewall + docker=1, restarting CSF means that you actually have to restart the docker process itself. Not the containers, but the main docker process, which is a royal pain in the rear. I implemented some refresh functionality so that you don't have to restart it any longer.
If you attempted to bring a container up after shutting down / restarting CSF; it would originally throw an error that rules were not available for that container and it would cause the container to not respond to any actions.
> For nginx proxy to be able to get a direct connection to the container once again.. Is there any hook for docker, which will restart csf automatically, so that upgraded/restared containers work flawlessly?
Let me guess: you have docker automatically assigning container IPs, correct? You're not using a static IP assigned in the nginx docker-compose.yml. That would make sense, because CSF originally pulls the list of IPs assigned to your containers, and if nginx restarts on a new IP, CSF doesn't know about the new assignment. It still thinks you're on the old one.
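If so, one mitigation on the user's side is pinning a static address so the recorded rules stay valid across restarts; an illustrative docker-compose fragment (service, network name, and subnet are made up):

```yaml
services:
  nginx:
    networks:
      proxy:
        ipv4_address: 172.20.0.10   # fixed address the firewall rules can rely on

networks:
  proxy:
    ipam:
      config:
        - subnet: 172.20.0.0/16
```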
Let me pop into an install of docker + CSF on Debian 10 and I'll take a look at some type of automatic refresh functionality. Since docker is containerized, it limits what can be done on the host machine (for security reasons). Obviously the simple solution would be to create a systemd job which runs through your containers each day. But that presents other issues like tags.
I usually advise people against using the :latest tag, simply because if a breaking change happens, it messes the user up to where they're having to back-track to see what broke their container.
Of course the user could have a bash script to check the current image hash against the latest, using something such as:

```shell
docker inspect your-container -f '{{index .Config.Labels "com.docker.compose.config-hash"}}'
```
Another option is also via a script, but with the use of the Portainer API.
This is a good example (or argument) for me wanting to actually write my own version of CSF, vs just these patch scripts. Because actually changing the functionality of CSF itself gives me much more vast abilities. Docker service monitoring could then be built in, as well as a complete migration of nftables.
> And last but not least regarding the: "csf.allow" file, I see that the file can get polluted very quickly depending on how many containers are used on the system. Is there any clean-up procedure, so that it does not have thousands of lines after a year? Currently, over 50 lines were added within 2 days..
This one is fair to think about. I can come up with something for this so that it's automated. I just have to differentiate between a custom added rule by you, and one added by the system. Then on CSF startup, I can just clean up the list of the ones added by the system, leaving only the custom user additions.
I'll come up with something for this once I get a debian install going.
The only concern I have about this is that /etc/csf/csf.allow is usually owned by root, and automating edits to it without prompting for a sudo password every time is tricky. Editing the CSF app itself would solve the issue.
> You are running Debian 10 / Buster on this machine correct? I want to bring up a VM and test it out on there. I'm currently on Ubuntu 24.04 LTS.
Yes, I use Debian 10 Buster. Will upgrade to Debian 12 once I have more free time..
> If you attempted to bring a container up after shutting down / restarting CSF; it would originally throw an error that rules were not available for that container and it would cause the container to not respond to any actions.
I am aware of this. Your patch is working very well!
> Let me guess, you have the docker container automatically assigning IPs correct? You're not using a static IP assigned in the nginx docker-compose.yml. Because then that would make sense as CSF originally pulls the list of IPs assigned to your containers, and if Nginx is restarting on a new IP; then CSF doesn't know about the new IP assignment. It still thinks you're on the old one.
I have nginx on host. Running pi-hole, watchtower and several other containers in the docker.
> This is a good example (or argument) for me wanting to actually write my own version of CSF, vs just these patch scripts. Because actually changing the functionality of CSF itself gives me much more vast abilities. Docker service monitoring could then be built in, as well as a complete migration of nftables.
I see; this is probably the best approach. I checked the Docker website and found that hooks are not implemented and probably never will be. So one option is maybe to use Watchtower hooks (if there are any). In the case of a manual restart, one could simply run your patch or execute `csf -ra`.
> This one is fair to think about. I can come up with something for this so that it's automated. I just have to differentiate between a custom added rule by you, and one added by the system. Then on CSF startup, I can just clean up the list of the ones added by the system, leaving only the custom user additions.
This could be done with awk/sed, I guess. In your script there is a variable for the description, so that could be leveraged, IMHO.
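A sketch of that awk/sed idea (the helper name is an assumption; the marker is the script's CSF_COMMENT value): drop only the lines carrying the marker, leaving manual entries alone.

```shell
# Hypothetical cleanup helper: removes only the entries tagged with the
# script's CSF_COMMENT marker, keeping user-added lines untouched.
prune_script_entries() {
    allow_file="$1"
    marker="$2"
    # keep every line that does NOT contain the marker (grep -v -F);
    # grep exits 1 when no lines remain, so tolerate that case
    grep -v -F "${marker}" "${allow_file}" > "${allow_file}.tmp" || true
    mv "${allow_file}.tmp" "${allow_file}"
}

# usage: prune_script_entries /etc/csf/csf.allow "Docker container whitelist"
```

Run on CSF startup, this would reset the auto-generated entries before the script re-adds the current ones.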
> I'll come up with something for this once I get a debian install going.
Thanks!
> The only concern I have about this is that /etc/csf/csf.allow is usually owned by root. And trying to automate this without the need for a sudo password every time. When instead, editing the CSF app itself would solve the issue.
When running `csf -ra` in the console, sudo/root is used anyway, even for executing your docker patch, so this should be fine, I guess.
Thanks!
I've finally had a chance to test your updated script, and yes, it works correctly for us too. And yes, we do have various networks on some containers; we systematically do this to limit access from one container to others and to isolate the traefik public nets from the rest. Thanks.
We might partly use your script, but our key reason for using it initially was to solve the issue of docker not respecting CSF's port/IP limitations. We basically do not want to allow docker to open any port by mistake, say with an 8081:8081 port mapping on all IPs, if CSF did not allow access to that port.
After (too) many hours, we understood that this is a (major, to me) design flaw of docker, which does all of its processing in the PREROUTING chain, which CSF does not limit in any way.
To solve/fix it, we have created a "post" script that creates an early chain in PREROUTING, before Docker's, which only forwards to the DOCKER chain those ports and IPs that match certain rules. It also processes csf.allow to accept those. The rest of the traffic simply exits the PREROUTING chain and never reaches its destination.
I understand you also parse the csf.allow file, but I am still not certain with what objective.
We will try to execute this "after" your docker.sh and see how it goes. If you have an interest, we can inform you of our findings in case you want to incorporate similar behaviour into your script. From my point of view, this is the major issue to be resolved when running docker on a CSF-firewalled system.
We don't really need it now, as in our hypervisor setup we've discovered a way to do something similar at host level, so our ports and IPs are already limited there. In that case, we limit traffic at the FORWARD (filter) chain (to the VM interface), and so we can push everything to CSF's LOCALINPUT chain, which handles whitelisted IPs (csf.allow), dynamic IPs (csf.dyndns), and blocked IPs. This chain cannot be used at PREROUTING level unless recreated.
In any case we will try to test this alternative for more generic cases.
Thanks for the update.
We will try to execute this "after" your docker.sh and see how it goes. If you do have an interest, we can inform of our findings in case you want to incorporate a similar behaviour to your script.
If you wouldn't mind, yes, that would be great.
As I was mentioning to the other user, I've been debating in the back of my head what the next major step would be.
I like CSF, but I feel there are numerous areas where it needs to be improved, both functionally and in terms of security / ease of access, and in updating its standards.
CSF has a lot of nice functionality, with even more potential; it just needs to be updated.
As I mentioned to the other user as well, a primary goal would be to move CSF away from iptables and use nftables instead. Yes, it's technically using nftables already, since iptables is now a wrapper around it, but I'd like to walk away from iptables completely.
Ok, further to my last comment, I can confirm we implemented both logics correctly. We are using your docker.sh script and then executing our port-blocking script afterwards. In case it is of interest, I'll try to explain our strategy along with the generated rules:
We first create a chain that is the first to be executed in the NAT table. That chain determines whether any traffic reaches the DOCKER chain (generated beforehand by your docker.sh) or not, and this way we can limit open ports.
:DOCKER - [0:0]
:DOCKER-BLOCK - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -m comment --comment "Sends first all NAT traffic to our own chain: DOCKER-BLOCK" -j DOCKER-BLOCK
-A PREROUTING -m addrtype --dst-type LOCAL -m comment --comment "Exits prerouting after DOCKER-BLOCK" -j RETURN
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
Our DOCKER-BLOCK chain looks something like this:
-A DOCKER-BLOCK -s 79.xxx.xx.95/32 -m comment --comment "allow all docker NATing from that IP" -j DOCKER
-A DOCKER-BLOCK -s 10.0.0.1/32 -m comment --comment "Allow all docker NATing from local network" -j DOCKER
-A DOCKER-BLOCK -m set --match-set csf_allowed_ips src -m comment --comment "All IPs in csf.allow will be permitted too - IPSet generated from csf.allow" -j DOCKER
-A DOCKER-BLOCK -p tcp -m tcp --dport 80 -m comment --comment "Allows docker NATing traffic to port 80" -j DOCKER
-A DOCKER-BLOCK -p tcp -m tcp --dport 443 -m comment --comment "Allows docker NATing traffic to port 443" -j DOCKER
This way, any public port opened by Docker or Docker Compose (e.g. 8081:8081) must first be allowed in this chain before external traffic can reach it.
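As a rough sketch of the approach described above (the ports, chain placement, and use of an iptables-restore fragment are illustrative assumptions, not the poster's actual script), the key point is that `-I PREROUTING 1` inserts the filter chain ahead of Docker's own PREROUTING jump:

```shell
#!/bin/sh
# Sketch: emit an iptables-restore fragment for the nat table that places
# a DOCKER-BLOCK chain before Docker's rules. Port list is illustrative.
emit_docker_block() {
    cat <<'EOF'
*nat
:DOCKER-BLOCK - [0:0]
-I PREROUTING 1 -m addrtype --dst-type LOCAL -j DOCKER-BLOCK
-I PREROUTING 2 -m addrtype --dst-type LOCAL -j RETURN
EOF
    # One allow rule per published port; anything else hits the RETURN above
    for port in 80 443; do
        echo "-A DOCKER-BLOCK -p tcp -m tcp --dport $port -j DOCKER"
    done
    echo "COMMIT"
}

emit_docker_block
# To load it (as root) without flushing Docker's existing rules:
#   emit_docker_block | iptables-restore -n
```

Loading via `iptables-restore -n` (no-flush) keeps Docker's own NAT rules intact while prepending the filter.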
I am not really sure what you use csf.allow for in your script, but we parse its content to create an IPSet that allows those IPs at the host level.
A bit of a dirty trick... but it works for those (like us) who don't like that Docker overrides open ports and ignores CSF completely. It could be improved by parsing csf.conf for open ports, including dynamic IPs, etc., but...
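The csf.allow-to-IPSet step could be sketched roughly like this (the set name `csf_allowed_ips` matches the rule shown above, but the parsing details are our assumptions; csf.allow lines using the advanced `tcp|in|...` syntax are simply skipped):

```shell
#!/bin/sh
# Sketch: turn the plain IP/CIDR lines of a csf.allow file into an
# 'ipset restore' payload. Comments, blanks, and advanced filter
# syntax (lines containing '|') are skipped.
csf_allow_to_ipset() {
    allow_file="$1"
    echo "create csf_allowed_ips hash:net -exist"
    grep -Ev '^[[:space:]]*(#|$)' "$allow_file" \
        | grep -v '|' \
        | awk '{ print "add csf_allowed_ips " $1 " -exist" }'
}
# Usage (as root): csf_allow_to_ipset /etc/csf/csf.allow | ipset restore
```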
On the other matter you mention, regarding forking CSF, I have to admit that over the years I have become less inclined to use forks unless the source is officially dead or hasn't been updated for a while. So in our case we are quite happy as-is; to be honest, we are mainly using your docker.sh script to keep Docker from managing iptables itself so that it can coexist with CSF.
Thanks again for your help.
Thanks for this, it helps out a great deal. See my comment below about the csf.allow usage, as I like your method for filtering traffic.
I am not really sure what you use csf.allow for in your script, but we parse its content to create an IPSet that allows those IPs at the host level.
When docker containers are given access to CSF, a whitelist rule is added to the csf.allow file. However, as @martineliascz pointed out in our discussion, some users let Docker automatically assign a new IP each time a container is spun up, and this presents a problem: your csf.allow file would get rather large after a few months of bringing containers down and back up. So I'm currently working on an edit which will clean out the entries made by the script when CSF starts up. This, however, should leave custom entries the user has created alone and not affect them (including whatever your scripts also do to modify that file).
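A minimal sketch of that cleanup idea (the marker text is an assumption, borrowed from the `CSF_COMMENT` setting quoted earlier in the thread; the real script may identify its entries differently):

```shell
#!/bin/sh
# Sketch: drop only the lines the docker script added, identified by
# their comment marker, leaving user-created entries untouched.
prune_docker_entries() {
    allow_file="$1"
    marker="${2:-Docker container whitelist}"
    tmp="$allow_file.tmp"
    grep -v "$marker" "$allow_file" > "$tmp"
    mv "$tmp" "$allow_file"
}
# Usage: prune_docker_entries /etc/csf/csf.allow
```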
I am less inclined to use forks unless the source is officially dead or hasn't been updated in a while
Yeah, I have the same practice. Typically I don't use forks unless the original is dead and the fork is actively maintained, which is one of the reservations I have about embarking on a fork of my own. On the flip side, there are a few things in CSF that could really be improved, based both on my own usage and on what I've seen users complain about on the official CSF forums. So it's sort of a gamble: either I spend the time making the app better, or I let the developer take control of that and attempt to fix what I can using scripts.
Added the cleanup functionality. Should keep the allow list from building up from old IPs that have changed.
Thanks, let me test it tomorrow! :)
Sure thing, I'm just cleaning up the repository a bit.
I've set up an automated service which can also be used with CSF. The repository now contains a list of the top abusive IP addresses featured on sites like AbuseIPDB, as well as long-term bad-actor IPs.
So people can implement those blocks into their firewall rules to cut down on SSH brute-forces, sniffing, etc.
@Aetherinox Hey, does your service use ipset and csf.blocklists?
I currently have these entries in that file to prevent spam:
#AWS|86400|0|https://ip-ranges.amazonaws.com/ip-ranges.json
PROJECTHONEYPOT|86400|0|https://www.projecthoneypot.org/list_of_ips.php?t=d&rss=1
TOREXITNODES|86400|0|https://check.torproject.org/cgi-bin/TorBulkExitList.py?ip=1.1.1.1
MAXMINDGEOIP|86400|0|https://www.maxmind.com/en/high-risk-ip-sample-list
BRUTEFORCES|86400|0|http://danger.rulez.sk/projects/bruteforceblocker/blist.php
SPAMHAUSDROP|86400|0|https://www.spamhaus.org/drop/drop.txt
SPAMHAUSEDROP|86400|0|https://www.spamhaus.org/drop/edrop.txt
CIARMY|86400|0|https://cinsscore.com/list/ci-badguys.txt
BLOCKLISTDE|86400|0|https://lists.blocklist.de/lists/all.txt
GREENSNOW|86400|0|https://blocklist.greensnow.co/greensnow.txt
FIREHOL|86400|0|https://raw.githubusercontent.com/firehol/blocklist-ipsets/master/firehol_level1.netset
STOPFORUMSPAM|86400|0|https://raw.githubusercontent.com/firehol/blocklist-ipsets/master/stopforumspam_7d.ipset
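Each of these lines uses CSF's csf.blocklists format: NAME|refresh interval in seconds|max entries to use (0 = no cap)|URL. A throwaway parser sketch (the helper name is ours, for illustration only):

```shell
#!/bin/sh
# Sketch: split one csf.blocklists entry into its four fields
# (NAME | refresh seconds | max entries, 0 = unlimited | URL).
parse_blocklist_line() {
    echo "$1" | awk -F'|' '{ printf "name=%s interval=%s max=%s url=%s\n", $1, $2, $3, $4 }'
}

parse_blocklist_line 'CIARMY|86400|0|https://cinsscore.com/list/ci-badguys.txt'
```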
The short answer is: both.
At present, I've finished the entries for individual blocks, which go inside csf.deny, but I'm also working on a csf.blocklists version right now as well.
Both will support 100% confidence hits from abuseipdb and ipthreat.net. I'm also looking at adding the possibility of MaxmindGeoIP, but I need to write up the workflows for that first to fetch the proper entries.
I use Maxmind myself, but I need to go read up on the API limits, so that I can pull them and ensure they're constantly updated. I forget how often Maxmind updates.
Then at the end, you can just include the GitHub file in your list (much like the one you have for FIREHOL).
The basic csf.deny version is currently set to update every 6 hours. The only downside for people wanting to use the csf.deny version is having to add "do not delete" to the comment of each line. Not my favorite, but that won't be the case with the other method.
But in short, the csf.deny version should not be the version people use, as it creates a substantial number of iptables rules. I don't know why some people insist on using that file as their primary means of blocking; the other way will be the preferred one. You'll just drop my URL into your csf.blocklists and you're good to go.
Edit: Actually, the more I think about it, I may skip the csf.deny method entirely and go straight to the blocklists, because if I offer up a csf.deny, someone is going to use it, and it's just going to create performance issues for them.
Yeah, this sounds great.
Is your block list periodically updated?
I had to disable the #AWS list, since I wasn't receiving emails and couldn't connect to sites hosted on Amazon.
Yes, every 6 hours. I'm in the middle of re-working it to remove the "do not delete" comments.
In terms of AWS, that's another thing I've debated. I may break those up into different files, so you can pick what you want to use.
URLs may change. I'll throw a message once I'm done.
Great. Thx!
Alright, one final major change.
The blocklist is now compatible with the CSF blocklist feature. The new URL is:
It's also compatible with pihole, and the other major ones.
The rules for AWS have been removed; they'll be added to another list. The IPs in this master list are strictly SSH bruteforce and port scanners ONLY. The master list should only ever contain IPs that nobody would need to edit or remove (unless your own IP is in there).
AWS and the other privacy rules have been moved to
Updates every 6 hours.
CSFMaster|86400|0|https://raw.githubusercontent.com/Aetherinox/csf-firewall/main/blocklists/01_master.ipset
The header at the top of each file contains the entry count and update time:
# #
# 🧱 Firewall Blocklist - 01_master.ipset
#
# @url https://github.com/Aetherinox/csf-firewall
# @updated Mon Oct 21 23:05:04 UTC 2024
# @entries 142563
# @expires 6 hours
# @category full
#
# auto-generated list which contains the following:
# - AbuseIPDB 100% Confidence
# - IPThreat.net 90% Confidence
# - Port scanners
# - SSH bruteforce attempts
# #
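Headers in this shape are easy to consume from scripts. A small sketch that pulls the `@entries` count out of such a header (the field name comes from the example above; the helper itself is ours):

```shell
#!/bin/sh
# Sketch: read the @entries count from a blocklist header like the one
# shown above (comma-grouped digits are kept as written).
header_entries() {
    sed -n 's/^# @entries[[:space:]]*\([0-9][0-9,]*\).*/\1/p' "$1" | head -n 1
}
```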
Now comes the fun part where I have to write scripts to grab all the different types of IPs.
Alright, I've finished cleaning things up and getting the base finished. All of the blocklists can be found at:
Services like AWS are broken up into their own files, this also includes:
You can pick whatever privacy rulesets you want. Check back from time to time, I'm adding more.
The only major thing I still need to do is break up some of these into different timers / crons. Right now, all lists are updated every 6 hours, which is good for private / VPS IP addresses but isn't needed for things like Amazon / Bing, since their IP addresses don't change much; those should be on a once-per-7-days cycle. But it's not a big deal.
This looks great!
So if I understand it correctly, I can add all the blocklists highlighted in green, right?
Yes, you can add any of them. In the readme, I listed each one, and added a "Recommended" indicator.
The biggest ones are
The others are based on your preferences. Cloudfront and Fastly are CDNs; I haven't seen too many servers owned by them trying to hit other servers.
Amazon AWS / EC2, being rented VMs, are used by clients for things such as port scanning / SSH bruteforce. Surprisingly, on my own setup, Amazon is who I get hit by the most. Note that the AWS and EC2 IPs should not interfere with legitimate Amazon services; these are only rented boxes.
Google and Bing lists will kill crawlers for those search engines, so if you don't want your site crawled, you should block those two.
And these are all official lists; I get them directly from the companies. I'm currently working on adding Spamhaus; I just need some new functionality in place to break up the subnets and count them for the header.
Alright, the ipset system had another overhaul today.
Now the header of each file gives a description of what they do. There's also statistics in the header.
# #
# 🧱 Firewall Blocklist - 03_spam_spamhaus.ipset
#
# @url https://github.com/Aetherinox/csf-firewall
# @id 03_spam_spamhaus_ipset
# @updated Wed Oct 23 11:43:13 AM UTC 2024
# @entries 1,356 lines
# 1,356 subnets
# 15,956,480 ips
# @expires 6 hours
# @category Spam
#
# No description provided
# #
Also added 03_spam_spamhaus. It's updated directly from Spamhaus' official list.
Getting the actual IPs was the easy part. Automating all the counting and template headers is what ate the time. I'll continue to add more.
Cool! So which lists should I actually keep when using yours? Are any of those lists included in yours as well?
#AWS|86400|0|https://ip-ranges.amazonaws.com/ip-ranges.json
PROJECTHONEYPOT|86400|0|https://www.projecthoneypot.org/list_of_ips.php?t=d&rss=1
TOREXITNODES|86400|0|https://check.torproject.org/cgi-bin/TorBulkExitList.py?ip=1.1.1.1
MAXMINDGEOIP|86400|0|https://www.maxmind.com/en/high-risk-ip-sample-list
BRUTEFORCES|86400|0|http://danger.rulez.sk/projects/bruteforceblocker/blist.php
SPAMHAUSDROP|86400|0|https://www.spamhaus.org/drop/drop.txt
SPAMHAUSEDROP|86400|0|https://www.spamhaus.org/drop/edrop.txt
CIARMY|86400|0|https://cinsscore.com/list/ci-badguys.txt
BLOCKLISTDE|86400|0|https://lists.blocklist.de/lists/all.txt
GREENSNOW|86400|0|https://blocklist.greensnow.co/greensnow.txt
FIREHOL|86400|0|https://raw.githubusercontent.com/firehol/blocklist-ipsets/master/firehol_level1.netset
STOPFORUMSPAM|86400|0|https://raw.githubusercontent.com/firehol/blocklist-ipsets/master/stopforumspam_7d.ipset
Thanks!
Let me look through those lists and see if I include any of them. The only ones I know you can get rid of, if you switch to mine, are these two lines:
SPAMHAUSDROP|86400|0|https://www.spamhaus.org/drop/drop.txt
SPAMHAUSEDROP|86400|0|https://www.spamhaus.org/drop/edrop.txt
If you go to the 2nd link in your browser, you'll notice it's empty. That's because Spamhaus merged drop.txt and edrop.txt together.
And my Spamhaus ipset is the same one that's at https://www.spamhaus.org/drop/drop.txt, which is drop and edrop combined. So you can use either theirs or mine.
You can also remove https://danger.rulez.sk/projects/bruteforceblocker/blist.php from yours; my bruteforce / master list includes all of those IP ranges.
The other one I am merging is https://raw.githubusercontent.com/firehol/blocklist-ipsets/master/firehol_level1.netset
That file is a combination of the same Spamhaus rules plus the IPs in my master list. There are only a few I'm missing, but they'll be added today.
The other lists I need to review and see where I can merge them at.
I'm in the middle of creating a few rules which will auto-check my ipsets before they are pushed and ensure there are no duplicates. I don't want the same IPs being loaded over and over on a client.
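One way to sketch such a duplicate check (our own helper for illustration, not the actual workflow): entries appearing in more than one file show up in the output, so an empty result means the lists are disjoint.

```shell
#!/bin/sh
# Sketch: list entries that appear in more than one ipset file,
# ignoring comment and blank lines.
find_duplicates() {
    cat "$@" | grep -Ev '^[[:space:]]*(#|$)' | sort | uniq -d
}
# Usage: find_duplicates 01_master.ipset 03_spam_spamhaus.ipset
```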
The one in your list that confuses me is https://www.maxmind.com/en/high-risk-ip-sample-list. That's not a plain file, it's a website, and ConfigServer Firewall can't read HTML and pick the IP addresses out of it. It has to be a plain-text file like the ones I provide. I'll look at that list though and add it to my collection; I just need to find the official source.
Edit: I wrote up a script to grab the IPs on that MaxMind page listed above; it'll be in a list. I also wrote up the rules to ensure no duplicates exist, plus a final sorting pass so the IPs are in ascending order in the list.
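For the ascending sort, plain lexical `sort` misorders IPv4 addresses (10.x sorts before 2.x); sorting each octet as a numeric key fixes that. A sketch:

```shell
#!/bin/sh
# Sketch: sort IPv4 addresses in true ascending order by treating
# each dot-separated octet as a numeric sort key.
sort_ips() {
    sort -t . -k1,1n -k2,2n -k3,3n -k4,4n "$@"
}

printf '10.0.0.1\n2.2.2.2\n1.1.1.10\n1.1.1.2\n' | sort_ips
```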
In your list, you can now replace:
MAXMINDGEOIP|86400|0|https://www.maxmind.com/en/high-risk-ip-sample-list
With:
You can also remove the following:
https://danger.rulez.sk/projects/bruteforceblocker/blist.php
https://raw.githubusercontent.com/firehol/blocklist-ipsets/master/firehol_level1.netset
https://cinsscore.com/list/ci-badguys.txt
https://blocklist.greensnow.co/greensnow.txt
They are now included in the master list on mine:
So if you want to replace just the ones you already had in your list, you should have:
#AWS|86400|0|https://ip-ranges.amazonaws.com/ip-ranges.json
PROJECTHONEYPOT|86400|0|https://www.projecthoneypot.org/list_of_ips.php?t=d&rss=1
TOREXITNODES|86400|0|https://check.torproject.org/cgi-bin/TorBulkExitList.py?ip=1.1.1.1
CSF|86400|0|https://raw.githubusercontent.com/Aetherinox/csf-firewall/main/blocklists/01_master.ipset
CSFHIGHRISK|86400|0|https://raw.githubusercontent.com/Aetherinox/csf-firewall/main/blocklists/01_highrisk.ipset
CSFSPAMHAUS|86400|0|https://raw.githubusercontent.com/Aetherinox/csf-firewall/main/blocklists/03_spam_spamhaus.ipset
CSFSFORUMSPAM|86400|0|https://raw.githubusercontent.com/Aetherinox/csf-firewall/main/blocklists/03_spam_forums.ipset
The master list is rather big. Not as big as some others I've seen, but I also go through and ensure duplicates are removed.
# @entries 613,639,126 ips
# 4,147 subnets
# 163,986 lines
Thanks much! I am in hospital now. Will check it once I am back and let you know.
Hope all is well with you. Whenever you get back if you have questions, let me know. I've added some more lists since then.
I recently just added ipsets for all of the countries / continents. So you can block whichever countries you don't want accessing your server.
Hi @Aetherinox! So I am finally back home. All I can say is that your updated docker patch is working perfectly. No more piled up allowed IPs over-time. Thanks very much for this! 🙏
Regarding blocklists, I currently use the ones you suggested:
#AWS|86400|0|https://ip-ranges.amazonaws.com/ip-ranges.json
PROJECTHONEYPOT|86400|0|https://www.projecthoneypot.org/list_of_ips.php?t=d&rss=1
TOREXITNODES|86400|0|https://check.torproject.org/cgi-bin/TorBulkExitList.py?ip=1.1.1.1
CSF|86400|0|https://raw.githubusercontent.com/Aetherinox/csf-firewall/main/blocklists/01_master.ipset
CSFHIGHRISK|86400|0|https://raw.githubusercontent.com/Aetherinox/csf-firewall/main/blocklists/01_highrisk.ipset
CSFSPAMHAUS|86400|0|https://raw.githubusercontent.com/Aetherinox/csf-firewall/main/blocklists/03_spam_spamhaus.ipset
CSFSFORUMSPAM|86400|0|https://raw.githubusercontent.com/Aetherinox/csf-firewall/main/blocklists/03_spam_forums.ipset
They seem to work well. Should I add any others? I checked your repo and saw that you added bots and other lists. Also, should I use 01_master.ipset as you suggested, or master.ipset, which is updated more frequently?
Kindly send me an email as I have a private follow-up question to ask you .. mareliska@gmail.com
@Aetherinox Just one more point .. I see that there is actually one issue:
*ERROR* IPSET: [ipset v7.6: Error in line 65537: Hash is full, cannot add more elements]
How can this be fixed?
Thanks!
@Aetherinox Okay, so I eventually changed:
# Default: "65536"
LF_IPSET_MAXELEM = "200000"
And it seems to solve the issue.
But let me ask you - doesn't this slow down the machine significantly?
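For context on what the setting does underneath (this mapping is our reading of CSF's behavior, not confirmed in the thread): LF_IPSET_MAXELEM becomes the `maxelem` parameter of the ipset hash sets CSF creates. Since membership tests are hash lookups, raising the cap should mainly cost memory, not per-packet CPU. A guarded sketch with a made-up set name:

```shell
#!/bin/sh
# Sketch: 'maxelem' caps how many members a hash set may hold.
# Guarded, since creating sets needs root and the ipset tool.
demo_maxelem() {
    if [ "$(id -u)" = "0" ] && command -v ipset >/dev/null 2>&1; then
        ipset create demo_set hash:net maxelem 200000 \
            && ipset list demo_set | grep -i maxelem \
            && ipset destroy demo_set
    else
        echo "skipped: needs root and ipset"
    fi
}

demo_maxelem
```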
Thanks for publishing this. I am trying to get it working on our system. We had some similar rules working, but they somehow failed to pass the source IP through to the containers.
When executing csf -e, I get the following error from the docker.sh script: