Closed jaydee73 closed 6 months ago
You might need to allow connections to 127.0.0.1:53 (localhost) in your unbound.conf as well; please also see the Troubleshooting section for more information, @jaydee73:
This not only applies to the healthcheck:
When the extended healthcheck fails, reporting that the connection to 127.0.0.1 was refused, verify that you permit connections to localhost. There are multiple places in the unbound.conf where this could be disabled, from access-control to do-not-query-localhost and so on. You'll likely need to check the whole file. Isn't that alone a good reason for the concept of separated config directories? If you can't find the culprit, don't hesitate to give me a shout.
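For reference, a minimal sketch of the unbound.conf directives involved (these are real Unbound option names, but the values shown are illustrative, not this image's shipped defaults):

```conf
server:
  # The healthcheck queries 127.0.0.1, so loopback must be
  # bound and allowed here:
  interface: 127.0.0.1
  access-control: 127.0.0.0/8 allow
  # This option controls querying UPSTREAM servers on localhost;
  # it only matters if you forward to a resolver on 127.0.0.1.
  do-not-query-localhost: yes
```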
Edit: more concise answer.
Thanks for the quick update. I have already read the troubleshooting section and recognized this hint regarding healthchecks. But you are right, I hadn't related it to my problem.
But one question: I am by far no expert in Docker environments, but I assume that my error already occurs when starting the container. Therefore I thought this was an error in the communication between host (Synology) and container. I further thought that the healthcheck problem happens within the container but doesn't prevent the container from spinning up at all.
Anyway, I will check the complete unbound.log this evening. As already said, I am no expert, so it will be a little trial & error figuring out which places I have to check. But thankfully you already pointed out two of them (access-control, do-not-query-localhost).
My pleasure!
Apologies, it's not completely clear to me. Does the container spin up at all? If not, will it when using the 'default' port 5335?
If the healthcheck reports an issue, my guess is Unbound not accepting localhost connections.
Either way, I need to change this in the documentation and unbound.conf. Feel free to drop me your configs, I'll take a look.
No, with port 53 in the yaml file AND with a version newer than 1.19.1-0, the container doesn't start at all and throws the port-binding error message.
But with exactly the same configuration and version 1.19.1-0 (or prior) tagged in the yaml file, the container starts and runs without any problems.
I haven't tried this yet with port 5335; I will do so this evening.
Also I haven't tried whether the healthcheck is running, because the healthcheck isn't my main problem. Until now I haven't used it (see my yaml file above).
Please show me your configs, I'll take a look.
This unbound.conf works in my lab:
Please test, thank you.
Unfortunately, this unbound.conf also doesn't work for me. Still the same error log while spinning up the container (using the "latest" tag):
2024/03/18 16:32:00 stderr Mar 18 16:32:00 unbound[1:0] error: can't bind socket: Permission denied for 127.0.0.1 port 53
2024/03/18 16:31:46 stderr Mar 18 16:31:46 unbound[1:0] fatal error: could not open ports
2024/03/18 16:31:46 stderr Mar 18 16:31:46 unbound[1:0] error: can't bind socket: Permission denied for 127.0.0.1 port 53
2024/03/18 16:31:39 stderr Mar 18 16:31:39 unbound[1:0] fatal error: could not open ports
2024/03/18 16:31:39 stderr Mar 18 16:31:39 unbound[1:0] error: can't bind socket: Permission denied for 127.0.0.1 port 53
2024/03/18 16:31:35 stderr Mar 18 16:31:35 unbound[1:0] fatal error: could not open ports
2024/03/18 16:31:35 stderr Mar 18 16:31:35 unbound[1:0] error: can't bind socket: Permission denied for 127.0.0.1 port 53
2024/03/18 16:31:33 stderr Mar 18 16:31:33 unbound[1:0] fatal error: could not open ports
2024/03/18 16:31:33 stderr Mar 18 16:31:33 unbound[1:0] error: can't bind socket: Permission denied for 127.0.0.1 port 53
2024/03/18 16:31:31 stderr Mar 18 16:31:31 unbound[1:0] fatal error: could not open ports
2024/03/18 16:31:31 stderr Mar 18 16:31:31 unbound[1:0] error: can't bind socket: Permission denied for 127.0.0.1 port 53
2024/03/18 16:31:30 stderr Mar 18 16:31:30 unbound[1:0] fatal error: could not open ports
2024/03/18 16:31:30 stderr Mar 18 16:31:30 unbound[1:0] error: can't bind socket: Permission denied for 127.0.0.1 port 53
2024/03/18 16:31:29 stderr Mar 18 16:31:29 unbound[1:0] fatal error: could not open ports
2024/03/18 16:31:29 stderr Mar 18 16:31:29 unbound[1:0] error: can't bind socket: Permission denied for 127.0.0.1 port 53
2024/03/18 16:31:29 stderr Mar 18 16:31:29 unbound[1:0] fatal error: could not open ports
2024/03/18 16:31:29 stderr Mar 18 16:31:29 unbound[1:0] error: can't bind socket: Permission denied for 127.0.0.1 port 53
When I try again with the same configuration, and only the port changed to 5335 (in the yaml and in unbound.conf), the container spins up without problems.
I will come back with my config files.
My latest yaml:
```yaml
version: '2'
services:
  unbound:
    container_name: unbound_madnuttah
    image: madnuttah/unbound:latest
    hostname: unbound_madnuttah
    domainname: fritz.box
    cap_add:
      - NET_BIND_SERVICE
    ports:
      - 53:53/tcp
      - 53:53/udp
    networks:
      macvlan0:
        ipv4_address: 192.168.178.226
    environment:
      TZ: "Europe/Berlin"
      ServerIP: 192.168.178.226
    volumes:
      - ./unbound/unbound.conf:/usr/local/unbound/unbound.conf:rw
    restart: always
networks:
  macvlan0:
    name: macvlan0
    external: true
```
The unbound.conf used is the one from your posting above.
While doing some self-research, I found this from another Docker repo: https://github.com/klutchell/unbound-docker/issues/350
They are talking about 1.19.0 and 1.19.1, and the problem seems to be similar to mine. I assume the other maintainer is using the same version numbering as you do, since it comes from NLnet Labs?
Even if it's not the root cause, you don't need to specify ports in a pure MACVLAN setup in your docker-compose file. Also, I'm using compose version: '3'.
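To illustrate the point about MACVLAN, here is a sketch based on the compose file posted above (not a drop-in replacement): a pure MACVLAN service needs no ports: mapping at all, because the container gets its own IP on the LAN.

```yaml
version: '3'
services:
  unbound:
    image: madnuttah/unbound:latest
    cap_add:
      - NET_BIND_SERVICE      # allows binding the privileged port 53
    networks:
      macvlan0:
        ipv4_address: 192.168.178.226  # the container's own LAN address
    restart: always
networks:
  macvlan0:
    external: true
```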
Will the container start using another network mode with port 53? I just tested the unbound.conf I've provided with a MACVLAN compose including the cap, and the container spun up without any issues, pulling root keys.
netcat test:
myotherhost:~# nc -zv 172.16.0.53 53
172.16.0.53 (172.16.0.53:53) open
Please ramp up the verbosity, maybe we can see something fishy.
The problem you've mentioned doesn't apply to my image. The ports ain't hardcoded. :)
Did you use a different container before? If so, did you perform a docker-compose down && docker-compose up -d before switching?
I'm sorry for giving you a hard time... Unfortunately nothing changed (still the port bind error).
What completely confuses me is that starting the container with an older version of the image (but the same config files!) works without any problems. So could this really be a problem with my config files and/or my network configuration?
You are speaking about your lab environment. If I'm not wrong, you said in the other issue that you also have a Synology NAS? Did you try this on your NAS?
Setting verbosity to 4 leads to nothing really new in the Synology Container Manager log:
2024/03/18 19:53:53 stderr Mar 18 19:53:53 unbound[1:0] fatal error: could not open ports
2024/03/18 19:53:53 stderr Mar 18 19:53:53 unbound[1:0] error: can't bind socket: Permission denied for 127.0.0.1 port 53 (len 16)
2024/03/18 19:53:53 stderr Mar 18 19:53:53 unbound[1:0] debug: creating udp4 socket 127.0.0.1 53
2024/03/18 19:53:39 stderr Mar 18 19:53:39 unbound[1:0] fatal error: could not open ports
2024/03/18 19:53:39 stderr Mar 18 19:53:39 unbound[1:0] error: can't bind socket: Permission denied for 127.0.0.1 port 53 (len 16)
2024/03/18 19:53:39 stderr Mar 18 19:53:39 unbound[1:0] debug: creating udp4 socket 127.0.0.1 53
2024/03/18 19:53:32 stderr Mar 18 19:53:32 unbound[1:0] fatal error: could not open ports
2024/03/18 19:53:32 stderr Mar 18 19:53:32 unbound[1:0] error: can't bind socket: Permission denied for 127.0.0.1 port 53 (len 16)
2024/03/18 19:53:32 stderr Mar 18 19:53:32 unbound[1:0] debug: creating udp4 socket 127.0.0.1 53
And I'm sorry again, but I don't really understand what you mean by "use a different container". I'm using the Container Manager and create a new container with every try. After each unsuccessful try, I delete the project/container and start over from scratch.
I'm sorry for giving you a hard time
No, you don't.
What completely confuses me is that starting the container with an older version of the image (but the same config files!) works without any problems. So could this really be a problem with my config files and/or my network configuration?
We can't rule this out but we'll make it run for you too.
And I'm sorry again, but I don't really understand what you mean by "use a different container". I'm using the Container Manager and create a new container with every try. After each unsuccessful try, I delete the project/container and start over from scratch.
No apologies, please. :)
What I meant with 'different container' is precisely: 'have you used a different image from another maintainer?'
I used the Synology Docker management thingy before I learned the Docker CLI. Later I set up a dedicated Docker host because the Synology implementation is somehow inconsistent with vanilla Docker. I suggest enabling SSH and using the Docker CLI to handle your compose files. A docker-compose down && docker-compose up -d will completely remove the stack and recreate it from scratch, removing everything related to the stack. Being honest, I always used the CLI to manage my containers. I had strange effects from not completely removing stacks when testing around. It's worth a try!
What I meant with 'different container' is precisely: 'have you used a different image from another maintainer?'
I have tried the mvance image (https://github.com/MatthewVance/unbound-docker/) and the klutchell image (https://github.com/klutchell/unbound-docker), both on port 53, both also in a macvlan, and both also via the Syno GUI. ;-) Both spin up without problems. But they have some things I don't like, so I came to your image, which fits my needs better. If I can get it going... ;-)
I will try a cli-run of your image later this day.
CLI start ended up with the same error.
jaydee73@nas:/volume2/docker/unbound_madnuttah$ sudo docker-compose down && docker-compose up -d
Warning: No resource found to remove for project "unbound_madnuttah".
[+] Running 1/1
⠿ Container unbound_madnuttah Started 0.6s
jaydee73@nas:/volume2/docker/unbound_madnuttah$
Log:
2024/03/19 11:18:51 stderr Mar 19 11:18:51 unbound[1:0] fatal error: could not open ports
2024/03/19 11:18:51 stderr Mar 19 11:18:51 unbound[1:0] error: can't bind socket: Permission denied for 127.0.0.1 port 53
2024/03/19 11:18:38 stderr Mar 19 11:18:38 unbound[1:0] fatal error: could not open ports
2024/03/19 11:18:38 stderr Mar 19 11:18:38 unbound[1:0] error: can't bind socket: Permission denied for 127.0.0.1 port 53
2024/03/19 11:18:31 stderr Mar 19 11:18:31 unbound[1:0] fatal error: could not open ports
Weird, the container spins up with every variant here. I haven't tested it on my Syno, though.
I have found this in a Pi-hole issue: https://github.com/pi-hole/docker-pi-hole/issues/1252
@.... - in order for macvlan to work, the host network interface needs to be in promiscuous mode. On the Synology command line,
ip -d link
will show "promiscuity 1" if the mode is enabled and will show "promiscuity 0" if not. What is the result for the bond0 interface?
As if something already has port 53 in use. Could you test your NAS's ports without Unbound running using the netcat command, just to make sure?
nc -zv YourNasIP 53
Weird, the container spins up with every variant here. I haven't tested it on my Syno, though.
I have found this in a Pi-hole issue: pi-hole/docker-pi-hole#1252
@.... - in order for macvlan to work, the host network interface needs to be in promiscuous mode. On the Synology command line,
ip -d link
will show "promiscuity 1" if the mode is enabled and will show "promiscuity 0" if not. What is the result for the bond0 interface?
Yes, promiscuity is 1. I activated it while setting this whole thing up years ago (at first only with Pi-hole; later on I created a macvlan and added Unbound to the mix). While setting this up, I used sudo ip link set bond0 promisc on:
5: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:11:32:d8:7b:30 brd ff:ff:ff:ff:ff:ff promiscuity 1
As if something already has port 53 in use. Could you test your NAS's ports without Unbound running using the netcat command, just to make sure?
nc -zv YourNasIP 53
I'm not sure if I get this right. You mean running this command from my desktop machine (a Mac)? This gives me:
stefan@MacBook-Pro-von-Stefan ~ % nc -zv 192.168.178.36 53
nc: connectx to 192.168.178.36 port 53 (tcp) failed: Connection refused
Maybe, but to be honest, I don't think so... I found this. It's an old thread, but the last post is not that old and quite interesting. However, in my container setup, "execute container with high privilege" is already unchecked and "NET_BIND_SERVICE" is checked. So I assume it's configured correctly.
And again: the container spins up when using an older version but the same configuration files, so just by changing the "latest" tag. And other unbound containers (klutchell and mvance) are also running with port 53, and without any cap_add parameters.
May I kindly ask you to try to reproduce this on your own Syno? If it's running there with port 53, we can assume that it's a problem in my environment...
Ok, setting user: root in the compose makes the container run. There's a difference between vanilla and Synology Docker.
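A minimal sketch of the workaround, assuming the rest of the compose stays as posted above:

```yaml
services:
  unbound:
    image: madnuttah/unbound:latest
    # Synology-specific workaround: run the container as root so
    # Unbound may bind the privileged port 53.
    user: root
    cap_add:
      - NET_BIND_SERVICE
```

Note that this trades away the image's run-as-non-root hardening, which is presumably why it is not the default.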
And again: the container spins up when using an older version but the same configuration files, so just by changing the "latest" tag. And other unbound containers (klutchell and mvance) are also running with port 53, and without any cap_add parameters.
Yes, you are absolutely right; I am approaching this differently in terms of security. The 'old' versions 1.19.0-x have been retired.
As I wrote, the image runs on my Docker hosts, and for the nice person from issue #54 on non-Synology devices, by setting the CAP (which, by the way, is also necessary for Pi-hole, at least on my host), without any problems.
Thank you.
Edit: Just out of curiosity, why do you insist on using a privileged port when you have Pi-hole in your network? Pointing Pi-hole at port 5335, which you've mentioned you're using as well, is just a setting. Using port 53 would make sense if you used Unbound as your primary DNS server instead of Pi-hole.
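For illustration only: the PIHOLE_DNS_ environment variable is from the docker-pi-hole project, and the address and port below are the ones used in this thread. Pointing Pi-hole at Unbound on the unprivileged port would look roughly like this:

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    environment:
      # Upstream = Unbound on its unprivileged port
      PIHOLE_DNS_: "192.168.178.226#5335"
```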
Ok, setting user: root in the compose makes the container run. There's a difference between vanilla and Synology Docker.
Yes! user: root did the trick. The container spins up without the port binding error.
More answers later this evening...
Hi @jaydee73, do you need more assistance?
Sorry for being a little late. We went on a holiday trip this morning and I haven't had time for further tests. But so far so good, the container is running.
Thanks again for all your support.
Again, my pleasure. I'll close this then. Feel free to reopen or submit a new issue when needed.
Have fun with the image.
When running the container with port 53, I get a port bind error:
I am running the container in a docker environment on my Synology NAS (DSM7.2) in a macvlan.
I am referring to this issue, where I already mentioned my problem: https://github.com/madnuttah/unbound-docker/issues/54
I have tried different configurations, all with the same error (I also tried the minimal config version). Then I found out "by accident" that older versions of the container do work without the above-mentioned error. Every version up to 1.19.1-0 works; everything newer doesn't.
My yaml file:
If you need any more information, please let me know.
Regards, JD