chrisbennight opened this issue 2 weeks ago
Interesting that you're having DNS issues within the container. I run this image on a variety of Linux hosts, the majority of my images are Alpine-based, and I haven't run into it. What's your host OS?

I typically also use a multi-network configuration within the Docker stack using the bridge driver.
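Roughly this shape, for reference; a minimal sketch where the service and network names are just placeholders:

```yaml
# Minimal multi-network bridge layout (names are placeholders)
services:
  app:
    image: alpine:latest
    networks:
      - frontend
      - backend

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
```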
Ubuntu 22.04.5, though on kernel 6.8
Example below. 172.30.0.2 is the right answer; the first nslookup is for some reason skipping Docker's resolver (192.168.0.20 was returned because I have a wildcard in dnsmasq on my DNS server).
```
[tiredofit/db-backup:4.1.4 23:38:34 /] $ nslookup obslivesync
Server: 127.0.0.11
Address: 127.0.0.11:53
** server can't find obslivesync.tailb(...).ts.net: NXDOMAIN
** server can't find obslivesync.tailb(...).ts.net: NXDOMAIN
Non-authoritative answer:
Non-authoritative answer:
Name: obslivesync.domain.tld
Address: 192.168.0.20
[tiredofit/db-backup:4.1.4 23:38:49 /] $ apk update && apk add bind-tools
fetch https://dl-cdn.alpinelinux.org/alpine/v3.20/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.20/community/x86_64/APKINDEX.tar.gz
v3.20.3-171-ge0d8a949f52 [https://dl-cdn.alpinelinux.org/alpine/v3.20/main]
v3.20.3-170-gb7926213bd4 [https://dl-cdn.alpinelinux.org/alpine/v3.20/community]
OK: 24186 distinct packages available
(1/6) Installing fstrm (0.6.1-r4)
(2/6) Installing json-c (0.17-r0)
(3/6) Installing protobuf-c (1.5.0-r0)
(4/6) Installing libuv (1.48.0-r0)
(5/6) Installing bind-libs (9.18.27-r0)
(6/6) Installing bind-tools (9.18.27-r0)
Executing busybox-1.36.1-r29.trigger
OK: 725 MiB in 188 packages
[tiredofit/db-backup:4.1.4 23:39:34 /] $ nslookup obslivesync
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: obslivesync
Address: 172.30.0.2
[tiredofit/db-backup:4.1.4 23:39:38 /] $
```
That said, my solution doesn't completely fix things; nslookup and ping now resolve fine, but curl still has issues:
```
[tiredofit/db-backup:4.1.4 23:39:38 /] $ curl obslivesync:5984
curl: (7) Failed to connect to obslivesync port 5984 after 5 ms: Could not connect to server
[tiredofit/db-backup:4.1.4 23:43:37 /] $ ping obslivesync
PING obslivesync (172.30.0.2) 56(84) bytes of data.
64 bytes from obslivesync.obsidian-livesync_obsidian (172.30.0.2): icmp_seq=1 ttl=64 time=0.117 ms
64 bytes from obslivesync.obsidian-livesync_obsidian (172.30.0.2): icmp_seq=2 ttl=64 time=0.064 ms
64 bytes from obslivesync.obsidian-livesync_obsidian (172.30.0.2): icmp_seq=3 ttl=64 time=0.060 ms
64 bytes from obslivesync.obsidian-livesync_obsidian (172.30.0.2): icmp_seq=4 ttl=64 time=0.063 ms
64 bytes from obslivesync.obsidian-livesync_obsidian (172.30.0.2): icmp_seq=5 ttl=64 time=0.049 ms
^C
--- obslivesync ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4107ms
rtt min/avg/max/mdev = 0.049/0.070/0.117/0.023 ms
[tiredofit/db-backup:4.1.4 23:43:55 /] $ curl obslivesync:5984
curl: (7) Failed to connect to obslivesync port 5984 after 4 ms: Could not connect to server
[tiredofit/db-backup:4.1.4 23:44:04 /] $ curl 172.30.0.2:5984
{"error":"unauthorized","reason":"Authentication required."}
[tiredofit/db-backup:4.1.4 23:44:21 /]
```
Nothing fancy in the docker compose:
```yaml
services:
  obslivesync:
    image: couchdb:latest
    container_name: obslivesync
    user: 1000:1000
    networks:
      - t3_proxy
      - obsidian
    environment:
      - COUCHDB_USER=...
      - COUCHDB_PASSWORD=...
    volumes:
      - "cdb-data:/opt/couchdb/data"
      - "cdb-config:/opt/couchdb/etc/local.d"
    restart: unless-stopped
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
      - "traefik.enable=true"
      - "traefik.http.routers.obsidian-rtr.entrypoints=websecure"
      - "traefik.http.routers.obsidian-rtr.rule=Host(`obsync.domain.tld`)"
      - "traefik.http.routers.obsidian-rtr.middlewares=secured@file"
      - "traefik.http.routers.obsidian-rtr.service=obsidian-svc"
      - "traefik.http.services.obsidian-svc.loadbalancer.server.port=5984"

  obs_db_backup:
    container_name: obs_db_backup
    image: tiredofit/db-backup
    networks:
      - obsidian
    depends_on:
      - obslivesync
    volumes:
      - obs-db-backup:/backup
    environment:
      - DB01_TYPE=couch
      - DB01_HOST=obslivesync
      - DB01_NAME=obsync
      - DB01_USER=...
      - DB01_PORT=5984
      - DB01_PASS=...
      - DB01_DUMP_FREQ=720
      - DB01_CLEANUP_TIME=72000
      - DEFAULT_CHECKSUM=SHA1
      - DEFAULT_COMPRESSION=GZ
      - DB01_SPLIT_DB=true
      - CONTAINER_ENABLE_MONITORING=FALSE
    restart: unless-stopped
    labels:
      - "com.centurylinklabs.watchtower.enable=true"

networks:
  t3_proxy:
    external: true
  obsidian:

volumes:
  (...)
```
Feels like musl/Alpine things to me, not docker-db-backup issues. Just manually setting private IPs for now.
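For reference, pinning the address looks roughly like this, with DB01_HOST then pointed at the fixed IP; the subnet and address here are illustrative:

```yaml
# Illustrative: pin a static address so name resolution is out of the picture
services:
  obslivesync:
    networks:
      t3_proxy:
      obsidian:
        ipv4_address: 172.30.0.2

networks:
  obsidian:
    ipam:
      config:
        - subnet: 172.30.0.0/24
```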
Aha, yep, Alpine things: https://github.com/gliderlabs/docker-alpine/issues/574#issuecomment-1879741479
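The short version, as I understand that thread: the host's search domains end up in the container's /etc/resolv.conf, the resolver expands the unqualified name with them, and the expanded queries get forwarded upstream, which is where the dnsmasq wildcard answer comes from. Easy to confirm what the resolver is working with:

```sh
# The "search" line here is what turns a bare "obslivesync" into the
# obslivesync.tailb(...).ts.net / obslivesync.domain.tld lookups above
cat /etc/resolv.conf
```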
Adding a `.` at the end also results in correct resolution without adding bind-tools: `obslivesync.`
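i.e. the trailing dot marks the name as fully qualified, so the search domains never get appended:

```sh
# Trailing dot = FQDN; search domains from /etc/resolv.conf are not appended
nslookup obslivesync.
```

Presumably `DB01_HOST=obslivesync.` would work the same way, though I haven't tested that.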
musl has some interesting issues that creep up in certain environments, for sure.

You can try adding `CONTAINER_POST_INIT_COMMAND="apk update ; apk add bind-tools"`, which might help. I built a couple of hooks into the base image to execute commands after init for precisely this purpose, without having to muck about with forking and running your own version of the image, or putting too many more utilities in the base image. There is another option that lets you execute a script, although that requires you to mount a volume with said script. The command above should work; if not, try it without the quotes.
Reference: https://github.com/tiredofit/docker-alpine?tab=readme-ov-file#container-options
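Dropped into the compose file above, that would look something like this (untested sketch):

```yaml
# Sketch: install bind-tools after container init via the base image's hook;
# per the note above, drop the quotes if the command isn't picked up
services:
  obs_db_backup:
    environment:
      - CONTAINER_POST_INIT_COMMAND="apk update ; apk add bind-tools"
```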
Maybe related: I also saw a ts.net domain in the above output. I wonder if that is overriding things, as Tailscale runs its own resolver, at 100.100.100.100 if I recall correctly. It has been a while since I've run Tailscale. Newer, unreleased versions of my base image have Tailscale baked right in.
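If you want to rule that out, you can query Tailscale's MagicDNS resolver directly from inside the container:

```sh
# See whether Tailscale's resolver (100.100.100.100) is answering for the name
nslookup obslivesync 100.100.100.100
```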
Any chance of including bind-tools as one of the apks in the Dockerfile? Ref: https://github.com/nodejs/docker-node/issues/339

It only adds ~1 MB and addresses Alpine DNS resolution issues. Specifically, when I have multiple networks I can't get container names to resolve in the docker-db-backup image. Exec'ing in and running `apk update && apk add bind-tools` fixes it. Happy to submit a PR, just wanted to see if there was a willingness first.
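The change itself would be small; something along these lines in the image's Dockerfile (sketch only; I haven't checked how the package list is organized there):

```dockerfile
# Sketch: ship bind-tools in the image (~1 MB) so DNS lookups behave
RUN apk add --no-cache bind-tools
```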