Closed Bouni closed 3 years ago
This isn't a Pi-hole issue, you need to find out how Arch configures things to autostart containers at boot.
@dschaper I think this is indeed a problem with the pihole docker container somehow!
I had pihole in one docker-compose file and several other containers in a second docker-compose file. pihole didn't start on system reboot, all the others did.
So I copied the contents of the pihole compose file to the other. So now all my containers are configured in the same compose file.
Then I did the following steps:
1. docker-compose up -d (everything works fine)
2. sudo reboot
3. docker-compose ps -a
This is what I see:
Name Command State Ports
-----------------------------------------------------------------------------------------------------------------------------------------
caddy caddy run --config /etc/ca ... Up 0.0.0.0:2019->2019/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
grafana /run.sh Up 3000/tcp
home-assistant /init Up
influxdb /entrypoint.sh influxd Up 127.0.0.1:8086->8086/tcp
mosquitto /docker-entrypoint.sh /usr ... Up 0.0.0.0:1883->1883/tcp, 0.0.0.0:9001->9001/tcp
pihole /s6-init Exit 128
postgresql docker-entrypoint.sh postgres Up 127.0.0.1:5432->5432/tcp
syncthing /init Up 0.0.0.0:21027->21027/udp, 0.0.0.0:22000->22000/tcp, 0.0.0.0:8384->8384/tcp
vikunja-api /run.sh Up 3456/tcp
vikunja-frontend /docker-entrypoint.sh /bin ... Up 80/tcp
wireguard-ui /wireguard-ui Up
All containers start, except for the pihole container.
The docker-compose file looks like this:
version: '3'
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - "8053:8080/tcp"
      - "8443:443/tcp"
      - "5353:53/tcp"
      - "5353:53/udp"
      - "192.168.88.23:53:53/tcp"
      - "192.168.88.23:53:53/udp"
    environment:
      - TZ='Europe/Berlin'
      - WEB_PORT=8080
      - ServerIP=192.168.88.23
      - DNS1=46.182.19.48
      - DNS2=1.1.1.1
    dns:
      - 127.0.0.1
      - 46.182.19.48
    volumes:
      - './pihole/etc/pihole/:/etc/pihole/'
      - './pihole/etc/dnsmasq.d/:/etc/dnsmasq.d/'
    restart: always
  caddy:
    container_name: caddy
    image: caddy
    volumes:
      - /opt/docker/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /opt/docker/caddy/data:/data
      - /opt/docker/caddy/config:/config
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "2019:2019"
    depends_on:
      - pihole
  homeassistant:
    container_name: home-assistant
    image: homeassistant/home-assistant:stable
    volumes:
      - /opt/docker/homeassistant/config:/config
      - /dev/modbus0:/dev/modbus0
    environment:
      - TZ=Europe/Berlin
    restart: unless-stopped
    ports:
      - 8123:8123
    depends_on:
      - pihole
      - mosquitto
      - influxdb
      - postgresql
    network_mode: "host"
    privileged: true
  postgresql:
    container_name: postgresql
    image: postgres:12
    volumes:
      - /opt/docker/postgresql/initdb.d:/docker-entrypoint-initdb.d:ro
      - /opt/docker/postgresql/data:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    environment:
      <redacted>
    ports:
      - "127.0.0.1:5432:5432"
    restart: unless-stopped
    depends_on:
      - pihole
  influxdb:
    container_name: influxdb
    image: influxdb
    volumes:
      - /opt/docker/influxdb:/var/lib/influxdb
    environment:
      <redacted>
    ports:
      - "127.0.0.1:8086:8086"
    restart: unless-stopped
    depends_on:
      - pihole
  grafana:
    container_name: grafana
    image: grafana/grafana
    depends_on:
      - pihole
      - influxdb
    volumes:
      - /opt/docker/grafana:/var/lib/grafana
    user: "1000:1000"
    restart: unless-stopped
  mosquitto:
    image: eclipse-mosquitto
    container_name: mosquitto
    volumes:
      - /opt/docker/mosquitto/config:/mosquitto/config
      - /opt/docker/mosquitto/data:/mosquitto/data
      - /opt/docker/mosquitto/log:/mosquitto/log
    ports:
      - "1883:1883"
      - "9001:9001"
    restart: unless-stopped
    depends_on:
      - pihole
  syncthing:
    image: linuxserver/syncthing
    container_name: syncthing
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
      - UMASK_SET=022
    volumes:
      - /opt/docker/syncthing:/config
      - /storage/syncthing:/data1
    ports:
      - 8384:8384
      - 22000:22000
      - 21027:21027/udp
    restart: unless-stopped
    depends_on:
      - pihole
  wireguard-ui:
    image: embarkstudios/wireguard-ui:latest
    container_name: wireguard-ui
    restart: always
    environment:
      <redacted>
    volumes:
      - /opt/docker/wireguard-ui:/data
    ports:
      - "8820:8820"
    entrypoint: "/wireguard-ui"
    privileged: true
    network_mode: "host"
    depends_on:
      - pihole
  vikunja-api:
    container_name: vikunja-api
    image: vikunja/api
    environment:
      - VIKUNJA_DATABASE_TYPE=sqlite
      - VIKUNJA_DATABASE_PATH=/db/vikunja.db
      - VIKUNJA_SERVICE_ENABLEREGISTRATION=false
    volumes:
      - ./vikunja/files:/app/vikunja/files
      - ./vikunja/db:/db
    restart: unless-stopped
    depends_on:
      - pihole
  vikunja-frontend:
    container_name: vikunja-frontend
    image: vikunja/frontend
    restart: unless-stopped
    depends_on:
      - pihole
Edit: This is the output of docker-compose logs -f -t pihole
pihole | 2020-09-30T07:06:18.495017442Z [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
pihole | 2020-09-30T07:06:18.720466086Z [s6-init] ensuring user provided files have correct perms...exited 0.
pihole | 2020-09-30T07:06:18.727144922Z [fix-attrs.d] applying ownership & permissions fixes...
pihole | 2020-09-30T07:06:18.738031295Z [fix-attrs.d] 01-resolver-resolv: applying...
pihole | 2020-09-30T07:06:18.781051155Z [fix-attrs.d] 01-resolver-resolv: exited 0.
pihole | 2020-09-30T07:06:18.792002324Z [fix-attrs.d] done.
pihole | 2020-09-30T07:06:18.812022743Z [cont-init.d] executing container initialization scripts...
pihole | 2020-09-30T07:06:18.815606331Z [cont-init.d] 20-start.sh: executing...
pihole | 2020-09-30T07:06:19.001859750Z ::: Starting docker specific checks & setup for docker pihole/pihole
pihole | 2020-09-30T07:06:19.032685336Z Assigning random password: <redacted>
[✓] Update local cache of available packages
pihole | 2020-09-30T07:06:27.224262270Z [i] Existing PHP installation detected : PHP version 7.0.33-0+deb9u8
pihole | 2020-09-30T07:06:29.166699227Z
pihole | 2020-09-30T07:06:29.166865402Z [i] Installing configs from /etc/.pihole...
pihole | 2020-09-30T07:06:29.180195203Z [i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
[✓] Copying 01-pihole.conf to /etc/dnsmasq.d/01-pihole.conf
pihole | 2020-09-30T07:06:29.280859312Z chown: cannot access '': No such file or directory
pihole | 2020-09-30T07:06:29.298740227Z chmod: cannot access '': No such file or directory
pihole | 2020-09-30T07:06:29.307306133Z chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
pihole | 2020-09-30T07:06:30.125083054Z Custom WEB_PORT set to 8080
pihole | 2020-09-30T07:06:30.125148196Z INFO: Without proper router DNAT forwarding to 192.168.88.23:8080, you may not get any blocked websites on ads
pihole | 2020-09-30T07:06:30.136294563Z ::: Pre existing WEBPASSWORD found
pihole | 2020-09-30T07:06:30.203226551Z Using custom DNS servers: 46.182.19.48 & 1.1.1.1
pihole | 2020-09-30T07:06:30.216489499Z DNSMasq binding to default interface: eth0
pihole | 2020-09-30T07:06:30.336473008Z Added ENV to php:
pihole | 2020-09-30T07:06:30.346998011Z "PHP_ERROR_LOG" => "/var/log/lighttpd/error.log",
pihole | 2020-09-30T07:06:30.347018704Z "ServerIP" => "192.168.88.23",
pihole | 2020-09-30T07:06:30.347030794Z "VIRTUAL_HOST" => "192.168.88.23",
pihole | 2020-09-30T07:06:30.383098337Z Using IPv4 and IPv6
pihole | 2020-09-30T07:06:30.383163105Z ::: Preexisting ad list /etc/pihole/adlists.list detected ((exiting setup_blocklists early))
pihole | 2020-09-30T07:06:30.386860797Z https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
pihole | 2020-09-30T07:06:30.386922168Z https://mirror1.malwaredomains.com/files/justdomains
pihole | 2020-09-30T07:06:30.648508527Z ::: Testing pihole-FTL DNS: FTL started!
pihole | 2020-09-30T07:06:30.724073891Z ::: Testing lighttpd config: Syntax OK
pihole | 2020-09-30T07:06:30.725122627Z ::: All config checks passed, cleared for startup ...
pihole | 2020-09-30T07:06:30.727841844Z ::: Docker start setup complete
pihole | 2020-09-30T07:06:30.806931348Z [i] Neutrino emissions detected...
[✓] Pulling blocklist source list into range
pihole | 2020-09-30T07:06:30.836032750Z
[✓] Preparing new gravity database
pihole | 2020-09-30T07:06:30.918691496Z [i] Target: https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
[✓] Status: No changes detected
pihole | 2020-09-30T07:06:31.429330575Z [i] Received 56949 domains
pihole | 2020-09-30T07:06:31.429472468Z
pihole | 2020-09-30T07:06:31.429750947Z [i] Target: https://mirror1.malwaredomains.com/files/justdomains
[✓] Status: No changes detected
pihole | 2020-09-30T07:06:32.333814467Z [i] Received 26854 domains
pihole | 2020-09-30T07:06:32.334037474Z
[✓] Storing downloaded domains in new gravity database
[✓] Building tree
[✓] Swapping databases
pihole | 2020-09-30T07:06:33.652303754Z [i] Number of gravity domains: 83803 (83761 unique domains)
pihole | 2020-09-30T07:06:33.692276102Z [i] Number of exact blacklisted domains: 2
pihole | 2020-09-30T07:06:33.699159700Z [i] Number of regex blacklist filters: 0
pihole | 2020-09-30T07:06:33.705878372Z [i] Number of exact whitelisted domains: 0
pihole | 2020-09-30T07:06:33.712759836Z [i] Number of regex whitelist filters: 0
[✓] Cleaning up stray matter
pihole | 2020-09-30T07:06:33.743703148Z
pihole | 2020-09-30T07:06:33.754253839Z [✓] DNS service is running
pihole | 2020-09-30T07:06:33.762488533Z [✓] Pi-hole blocking is Enabled
pihole | 2020-09-30T07:06:33.811893796Z Pi-hole version is v5.1.2 (Latest: v5.1.2)
pihole | 2020-09-30T07:06:33.831863204Z AdminLTE version is v5.1.1 (Latest: v5.1.1)
pihole | 2020-09-30T07:06:33.849067633Z FTL version is v5.2 (Latest: v5.2)
pihole | 2020-09-30T07:06:33.851978605Z [cont-init.d] 20-start.sh: exited 0.
pihole | 2020-09-30T07:06:33.854466021Z [cont-init.d] done.
pihole | 2020-09-30T07:06:33.856628837Z [services.d] starting services
pihole | 2020-09-30T07:06:33.886950456Z Starting pihole-FTL (no-daemon) as root
pihole | 2020-09-30T07:06:33.888620508Z Starting crond
pihole | 2020-09-30T07:06:33.890490521Z Starting lighttpd
pihole | 2020-09-30T07:06:33.914126431Z [services.d] done.
pihole | 2020-09-30T07:08:51.632699092Z [cont-finish.d] executing container finish scripts...
pihole | 2020-09-30T07:08:51.636129651Z [cont-finish.d] done.
pihole | 2020-09-30T07:08:51.638099513Z [s6-finish] waiting for services.
pihole | 2020-09-30T07:08:51.956701479Z Stopping cron
pihole | 2020-09-30T07:08:51.958199902Z Stopping lighttpd
pihole | 2020-09-30T07:08:51.959819414Z Stopping pihole-FTL
pihole | 2020-09-30T07:08:52.004488686Z s6-svwait: fatal: supervisor died
pihole | 2020-09-30T07:08:52.206334299Z [s6-finish] sending all processes the TERM signal.
pihole | 2020-09-30T07:08:55.218258506Z [s6-finish] sending all processes the KILL signal and exiting.
pihole exited with code 128
Any ideas why this happens?
@Bouni Probably the container is starting up, but it then dies. I think your next task is to figure out why your pihole configuration runs fine manually but crashes at boot time. Perhaps there are some missing dependencies, either network or volume mounts, that are not available yet.
ports:
  - "8053:8080/tcp"
  - "8443:443/tcp"
  - "5353:53/tcp"
  - "5353:53/udp"
  - "192.168.88.23:53:53/tcp"
  - "192.168.88.23:53:53/udp"
Why so many ports? There's nothing in the container that will answer or use most of those maps.
@Bouni Probably the container is starting up, but it then dies. I think your next task is to figure out why your pihole configuration runs fine manually but crashes at boot time. Perhaps there are some missing dependencies, either network or volume mounts, that are not available yet.
+1 to that, and to add to it: if you wait a couple minutes after reboot and issue docker start pihole again, does it start up OK? If yes, that would indicate the service is being auto-started too early in the stack of dependencies.
If that isn't the case then I'd turn to the internal container contents, specifically the lighttpd / pihole-FTL logs under /var/log, to see what they have to say. It could be that those services bombed and caused the container to die. But again, it may be due to starting up too early post reboot, so check that first.
Hi,
@lightswitch05 I just don't know where to search? The two mounted volumes are in the same directory as the docker-compose file:
volumes:
  - './pihole/etc/pihole/:/etc/pihole/'
  - './pihole/etc/dnsmasq.d/:/etc/dnsmasq.d/'
I think this line from the logs is the key, but I need to somehow debug what's going on:
s6-svwait: fatal: supervisor died
@dschaper
Why so many ports? There's nothing in the container that will answer or use most of those maps.
I had trouble getting pihole up and running and tried so many things that I copied configs from several issues, and the ports section was amongst them 😄 I guess this would be sufficient, right?
ports:
  - "8053:8080/tcp"
  - "192.168.88.23:53:53/tcp"
  - "192.168.88.23:53:53/udp"
Do I actually need to have the host IP in front of the ports?
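(For reference, and as general Docker behavior rather than anything Pi-hole specific: a published port without a host IP prefix binds on all host interfaces, while a host IP prefix restricts the listener to that one address. A minimal sketch, reusing the 192.168.88.23 address from the compose file above:

```
ports:
  - "53:53/udp"                # published on 0.0.0.0:53, i.e. every host interface
  - "192.168.88.23:53:53/udp"  # published only on 192.168.88.23:53
```

So the IP prefix is only needed if you want to keep port 53 free on the host's other interfaces.)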
@diginc
if you wait a couple minutes after reboot and issue docker start pihole again does it startup ok?
I can issue a docker-compose up -d directly, without any waiting time, and the pihole container starts up just fine! Is there a way to delay container start?
@dschaper You were right about the ports, I reduced it to
ports:
  - "8053:8080/tcp"
  - "53:53/tcp"
  - "53:53/udp"
and everything works. Nevertheless, my initial problem remains.
I mapped a volume to the /var/log folder of the container so that I can easily search the logs.
volumes:
  - './pihole/log/:/var/log/'
But I don't see anything suspicious in them. Any tips where to look for problems in the logs?
I still don't think you need to map 8053 to 8080, since there's nothing in the image that operates on that port?
As for the volume map, lighttpd is very sensitive to permissions; if the daemon inside the container does not have read/write with the www-data user then lighttpd will not even start.
@dschaper I have set the web port to 8080:
environment:
  - WEB_PORT=8080
and I use port 8053 to access it. Maybe I should change the webport to 8053 🤔
As for the volume map, lighttpd is very sensitive to permissions, if the daemon inside the container does not have read/write with the www-data user then lighttpd will not even start.
But that does not seem to be an issue with a starting container, right? It would fail on any restart attempt after the boot process as well.
Any ideas where to look for the problem?
Why are you changing the port inside the container? Are you running with a macvlan configuration or host networking?
If I remember correctly that was one of my countless tries getting pihole up and running. I'll change it back to 80 and map that to 8053 and see if that changes anything regarding my boot problem.
🤯
I don't know if removing WEB_PORT and changing the mapping to 8053:80 did the trick, but I just rebooted the server and pihole came up on its own!
In case anybody faces a similar problem, this is my docker-compose section for pihole:
pihole:
  container_name: pihole
  image: pihole/pihole:latest
  ports:
    - "8053:80"
    - "53:53/tcp"
    - "53:53/udp"
  environment:
    - TZ='Europe/Berlin'
    - ServerIP=192.168.88.23
    - DNS1=46.182.19.48
    - DNS2=1.1.1.1
  dns:
    - 127.0.0.1
    - 46.182.19.48
  volumes:
    - './pihole/etc/pihole/:/etc/pihole/'
    - './pihole/etc/dnsmasq.d/:/etc/dnsmasq.d/'
  restart: always
The only time you need to change the port inside the container is when the container is directly using the host's network. The container uses the host's ports in host networking mode; if port 80 is being used on the host then it can't be used in the container. That is what that environment variable is for.
In the other modes there is a separate network for docker. (Taking a lot of liberties there, it's all the host networking but I don't want to lose people in the details.) Thus the containers are free to use whatever port they need on that separate network; it's up to docker to handle the port mappings.
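The two cases described above can be sketched with a pair of illustrative compose fragments (not the poster's full file, just the keys that matter here):

```
# Alternative 1 - bridge networking (default): remap on the host side,
# leave the container's web server on its default port 80.
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "8053:80"   # host 8053 -> container 80

---
# Alternative 2 - host networking: there is no mapping layer, so if host
# port 80 is already taken, the web server inside the container must move.
services:
  pihole:
    image: pihole/pihole:latest
    network_mode: "host"
    environment:
      - WEB_PORT=8080   # lighttpd itself listens on 8080
```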
I have a very similar problem...
When I reboot the host on which my Pi-hole container is running, the Pi-hole container does not automatically start; instead it is in the exited state. But if I start the Pi-hole container manually afterwards, then it starts up fine. The container has a restart policy of always, but that does not seem to help.
This is what I see in the logs of the Pi-hole container:
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service _postFTL: stopping
s6-rc: info: service _postFTL successfully stopped
s6-rc: info: service lighttpd: stopping
Stopping lighttpd
s6-rc: info: service lighttpd successfully stopped
s6-rc: info: service pihole-FTL: stopping
Stopping pihole-FTL
s6-rc: info: service pihole-FTL successfully stopped
s6-rc: info: service _startup: stopping
s6-rc: info: service _startup successfully stopped
s6-rc: info: service _uid-gid-changer: stopping
s6-rc: info: service _uid-gid-changer successfully stopped
s6-rc: info: service cron: stopping
Stopping cron
s6-rc: info: service cron successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
My Docker compose file for Pi-hole looks like this:
services:
  pihole:
    container_name: pi-hole
    image: pihole/pihole:latest
    restart: always
    # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
    ports:
      - "192.168.0.128:53:53/tcp" # DNS TCP port - important to bind to the IP of the host, otherwise DNS resolution won't work in Docker containers
      - "192.168.0.128:53:53/udp" # DNS UDP port - important to bind to the IP of the host, otherwise DNS resolution won't work in Docker containers
      # - "67:67/udp" # Only required if you are using Pi-hole as your DHCP server
      - "8001:80/tcp" # HTTP port
    environment:
      WEBPASSWORD: '<PI_HOLE_PASSWORD>'
      DNSMASQ_LISTENING: all
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - pihole-etc:/etc/pihole
      - pihole-etc-dnsmasq.d:/etc/dnsmasq.d
volumes:
  pihole-etc:
    name: pihole-etc
  pihole-etc-dnsmasq.d:
    name: pihole-etc-dnsmasq.d
I was able to narrow down the problem to the following DNS port mappings in the Docker compose file:
ports:
  - "192.168.0.128:53:53/tcp"
  - "192.168.0.128:53:53/udp"
If I change it to the following, then it works well (Pi-hole container starts fine after reboot):
ports:
  - "53:53/tcp"
  - "53:53/udp"
Note that 192.168.0.128 is the local IP address of the host on which the Pi-hole container is running.
The problem is, unfortunately, that I do need to map the DNS port 53 in the original form (using the host IP address in the mapping), otherwise DNS resolving does not work in any other containers which are defined in other Docker compose files.
I'm not sure why mapping the DNS port 53 using the host IP causes the Pi-hole container not to start up, but I've noticed that very rarely it works even with this mapping (maybe 1 out of 10 times), so I guess that mapping using the host IP address introduces some kind of dependency which is usually not yet available when the Pi-hole container is starting.
Not sure how to solve this to have the Pi-hole container starting automatically after the host reboot, but also to have working DNS resolving in other containers.
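One possible workaround, sketched under the assumption that the host uses systemd and that its network manager populates network-online.target correctly (the drop-in path and file name below are illustrative): delay the Docker daemon itself until networking is up, so that containers with restart: always are only brought back once the host IP exists to bind to:

```
# /etc/systemd/system/docker.service.d/wait-online.conf (hypothetical drop-in)
[Unit]
After=network-online.target
Wants=network-online.target
```

Whether this is sufficient depends on the network manager in use; on some setups network-online.target is reached before addresses are actually assigned.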
I have the same issue as @bazsodombiandras
I fixed it using a systemd unit which is delayed by 5 seconds (as network-online.target wasn't enough). My device is connected to the internet via WLAN, which may introduce some delay until it is connected.
Here's the unit file:
[Unit]
Description=Start Docker Container pihole
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStartPre=/bin/sleep 5
ExecStart=/usr/bin/docker start pihole
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
Versions
Platform
Expected behavior
I expect the container to automatically start when I reboot the host machine.
My docker-compose.yaml looks like this:
You see that I've set restart to always. I start the container with docker-compose up -d.
Actual behavior / bug
The container does not start when I reboot the host machine.
Steps to reproduce
docker-compose up -d
sudo reboot
docker-compose ps -a
Debug Token
Additional context
When I initially start the container with docker-compose up -d, I get these logs from docker-compose logs -f -t:
docker-compose ps -a gives me this output:
After reboot, I see these logs (beginning where the reboot started):
docker-compose ps -a gives me this output: