typokign closed this issue 4 years ago
Did you check to see if your configured page was reachable on another port? Not sure why, but my config works in addition to the default page. I just don't really allow any traffic to hit that port, and I'm good.
Sorry for the delay, yes, I'm certain that it is only serving the default page on port 2015, and not responding on any other ports.
Can you run "docker inspect" on the container?
[
{
"Id": "18216d5cc478496f8cc87cdeaeba28fd1b161b3d37c2504026693a8c27f871ff",
"Created": "2019-10-08T04:40:32.818267264Z",
"Path": "/bin/parent",
"Args": [
"caddy",
"-ca",
"https://acme-staging-v02.api.letsencrypt.org/directory"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 6859,
"ExitCode": 0,
"Error": "",
"StartedAt": "2019-10-08T04:40:34.386296443Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:7b7636ac875d205d9b30b52a0eb9f8db65998524b393338786f27d9f33766e41",
"ResolvConfPath": "/var/lib/docker/containers/18216d5cc478496f8cc87cdeaeba28fd1b161b3d37c2504026693a8c27f871ff/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/18216d5cc478496f8cc87cdeaeba28fd1b161b3d37c2504026693a8c27f871ff/hostname",
"HostsPath": "/var/lib/docker/containers/18216d5cc478496f8cc87cdeaeba28fd1b161b3d37c2504026693a8c27f871ff/hosts",
"LogPath": "",
"Name": "/caddy",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "docker-default",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/home/web/config/Caddyfile:/etc/Caddyfile",
"/home/web/ssl:/root/.caddy:rw",
"/home/web/static:/www:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "none",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {
"443/tcp": [
{
"HostIp": "",
"HostPort": "8443"
}
],
"80/tcp": [
{
"HostIp": "",
"HostPort": "8080"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": true,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/c5f3ed1394a4bfe48c73158a08ff2c781aa012c9468aaf7b0b4bc4d62f95132f-init/diff:/var/lib/docker/overlay2/4c51bdf03ef34902ec15d5872e84b1de8552cf122bfcbd3498e69af8736f57b2/diff:/var/lib/docker/overlay2/ae9bc8dd773bf03ed6103955054f528aeb05f24733f4c3ce6ec974d569bfbeeb/diff:/var/lib/docker/overlay2/0af867539402602da3747dbcd206b48e4d607879171727950fad5600ffd0f56c/diff:/var/lib/docker/overlay2/df0a6eb2708f662c74fb78776c321f8b8a20abe0e5f126575b1e9e985677386a/diff:/var/lib/docker/overlay2/eee6392fd2f5b66aea396e5fceb413f519e9630034be003c7247dd452feb8837/diff:/var/lib/docker/overlay2/5b0b806f83372daf0fb9193947db13f446c3c3d284d42771dd331f22902a0e1a/diff:/var/lib/docker/overlay2/a562e47c54dae6b75b81a2d6246953c458a5f3d80ef445f07e05045b32071a99/diff",
"MergedDir": "/var/lib/docker/overlay2/c5f3ed1394a4bfe48c73158a08ff2c781aa012c9468aaf7b0b4bc4d62f95132f/merged",
"UpperDir": "/var/lib/docker/overlay2/c5f3ed1394a4bfe48c73158a08ff2c781aa012c9468aaf7b0b4bc4d62f95132f/diff",
"WorkDir": "/var/lib/docker/overlay2/c5f3ed1394a4bfe48c73158a08ff2c781aa012c9468aaf7b0b4bc4d62f95132f/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "a02e2e9baab7154c0d0c253f8f3216d1b2b456dd1f6c19a59507d7ef80c4018f",
"Source": "/var/lib/docker/volumes/a02e2e9baab7154c0d0c253f8f3216d1b2b456dd1f6c19a59507d7ef80c4018f/_data",
"Destination": "/srv",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/home/web/config/Caddyfile",
"Destination": "/etc/Caddyfile",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/home/web/ssl",
"Destination": "/root/.caddy",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/home/web/static",
"Destination": "/www",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "18216d5cc478",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": true,
"AttachStderr": true,
"ExposedPorts": {
"2015/tcp": {},
"443/tcp": {},
"80/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"ACME_AGREE=true",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"ENABLE_TELEMETRY="
],
"Cmd": [
"-ca",
"https://acme-staging-v02.api.letsencrypt.org/directory"
],
"Image": "abiosoft/caddy:1.0.3",
"Volumes": {
"/root/.caddy": {},
"/srv": {}
},
"WorkingDir": "/srv",
"Entrypoint": [
"/bin/parent",
"caddy"
],
"OnBuild": null,
"Labels": {
"caddy_version": "1.0.3",
"maintainer": "Abiola Ibrahim <abiola89@gmail.com>"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "3c3e2518f9e74fe94fda809af8f07735dcf2611974bc868a99e8a0fc37c04bd3",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"2015/tcp": null,
"443/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8443"
}
],
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8080"
}
]
},
"SandboxKey": "/var/run/docker/netns/3c3e2518f9e7",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "b9b1f2537a03957bd171d949ef7bc894c20db2ac5708fb30a4a2571e52be7fad",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "7d470b673e2fee46657c5cdd53b493e00d966eae0797d44615383ae7d41ed8e0",
"EndpointID": "b9b1f2537a03957bd171d949ef7bc894c20db2ac5708fb30a4a2571e52be7fad",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
}
]
Can we see a Caddyfile you’re trying to use? Everything looks good there, especially since you said the Caddyfile inside the container is readable, and matches what’s on the host.
Did you try curl-ing localhost on https/8443, and http/8080? Looks like that’s where you have 80, and 443 mapped.
Yep, I'm curling 8443 and 8080.
Current Caddyfile (though I've tried with other, simpler caddyfiles as well):
(common) {
tls david@typokign.com
gzip
}
typokign.com {
import common
redir https://www.typokign.com{uri}
}
www.typokign.com {
import common
root /www
}
matrix.typokign.com {
import common
proxy / localhost:10080
}
riot.typokign.com {
import common
proxy / localhost:10080
}
mail.typokign.com {
import common
proxy / localhost:11080
}
Okay, I think I see the problem now. Have you tried curl-ing and setting the Host header? Like this:
curl http://localhost:2015 -H "Host: www.typokign.com"
Everything might be on 2015 right now because you didn’t specify a port in your Caddyfile. Consequently, without specifying the host header, you get the default Caddy page. Explicitly add the port you want each directive to serve, and add the host header, and you should be good.
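Concretely, once ports are declared in the Caddyfile, the checks would look something like this (a sketch, not from the thread; -k is needed because the staging CA certificate won't be trusted by curl):

```shell
# Hit the mapped host ports with a Host header matching a site in the Caddyfile.
curl -k https://localhost:8443 -H "Host: www.typokign.com"   # should serve /www
curl http://localhost:8080 -H "Host: www.typokign.com"       # should redirect to https
```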
Thanks, I've added the ports but I am still only serving the default page on port 2015:
(common) {
tls david@typokign.com
gzip
}
:8080 {
redir https://{host}{uri}:8443
}
https://typokign.com:8443 {
import common
redir https://www.typokign.com:8443{uri}
}
https://www.typokign.com:8443 {
import common
root /www
}
https://matrix.typokign.com:8443 {
import common
proxy / localhost:10080
}
https://riot.typokign.com:8443 {
import common
proxy / localhost:10080
}
https://mail.typokign.com:8443 {
import common
proxy / localhost:11080
}
Logs:
Oct 14 23:57:34 casa caddy[29977]: Serving HTTP on port 2015
Oct 14 23:57:34 casa caddy[29977]: http://:2015
Curling that port with any value of the host header returns the welcome to Caddy page.
The only thing that could explain that is either an error in your Caddyfile that makes Caddy quit reading as soon as it sees it (which seems unlikely given you've tried simpler Caddyfiles, and that's not Caddy's style), or that the Caddyfile you've defined isn't actually mounted and readable inside your container. I wish I could say this was something Caddy was messing up, but I've deployed this container inside three production networks and I've never had Caddy ignore my Caddyfile.
Just to clarify, here's how I'd troubleshoot (run these inside the Caddy container):
ls -al /etc/Caddyfile: should show that the file is readable at least by the user that Caddy is running as (rw-r--r-- or something similar)
cat /etc/Caddyfile: should show the same Caddyfile as outside the container.
Make sure that every time you change how you're configuring your container, you check those two things. You can't change the config and then restart Caddy inside the container (unless you use some sort of non-standard entrypoint). The container's life depends on Caddy running. If Caddy is restarted, the container goes away.
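Those checks can also be run from the host in one pass; the container name caddy is taken from the inspect output above (adjust if yours differs):

```shell
# Verify the mounted Caddyfile from the host side (container named "caddy").
docker exec caddy ls -al /etc/Caddyfile   # check ownership and permission bits
docker exec caddy cat /etc/Caddyfile      # contents should match the host copy
# No output from diff means the container and host copies are identical:
diff <(docker exec caddy cat /etc/Caddyfile) /home/web/config/Caddyfile
```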
How are you deploying the container? Just locally? Swarm? Compose? Can we see how you're starting the container?
Commands in container:
~ # ls -al /etc/Caddyfile
-r-------- 1 root root 504 Oct 14 23:57 /etc/Caddyfile
~ # whoami
root
~ # cat /etc/Caddyfile
(common) {
tls david@typokign.com
gzip
}
:8080 {
redir https://{host}{uri}:8443
}
https://typokign.com:8443 {
import common
redir https://www.typokign.com:8443{uri}
}
https://www.typokign.com:8443 {
import common
root /www
}
https://matrix.typokign.com:8443 {
import common
proxy / localhost:10080
}
https://riot.typokign.com:8443 {
import common
proxy / localhost:10080
}
https://mail.typokign.com:8443 {
import common
proxy / localhost:11080
}
~ #
I am deploying the container with a systemd unit that simply runs the container with docker run. Here's the full unit file:
[Unit]
Description=Caddy static site and reverse proxy
Requires=docker.service
After=docker.service
[Service]
Type=simple
ExecStartPre=-/usr/bin/docker kill caddy
ExecStartPre=-/usr/bin/docker rm caddy
ExecStart=/usr/bin/docker run --rm --name caddy \
--log-driver=none \
-p 8080:8080 \
-p 8443:8443 \
-v /home/web/config/Caddyfile:/etc/Caddyfile \
-v /home/web/ssl:/root/.caddy:rw \
-v /home/web/static:/www:ro \
-e "ACME_AGREE=true" \
abiosoft/caddy:1.0.3-no-stats -ca https://acme-staging-v02.api.letsencrypt.org/directory
ExecStop=-/usr/bin/docker kill caddy
ExecStop=-/usr/bin/docker rm caddy
ExecReload=/usr/bin/docker exec caddy kill -HUP 1
Restart=always
RestartSec=30
SyslogIdentifier=caddy
[Install]
WantedBy=multi-user.target
How are you looking at the logs with the log-driver set to "none"?
Also, it might be a bit late for this, but a single node swarm is probably a much more canonical way to get this going. Then you could declare your Caddyfile as a config, and use a named volume for your certs.
docker swarm init
docker service create caddy ... (see service command help for all the details)
Systemd doesn’t care if your container dies, swarm does.
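A rough sketch of that approach, with assumed names (the caddyfile config name and caddy_certs volume are placeholders I've invented) and ports mirroring the run command above:

```shell
# Single-node swarm sketch: Caddyfile as a swarm config, certs in a named volume.
docker swarm init
docker config create caddyfile /home/web/config/Caddyfile
docker service create --name caddy \
  --config source=caddyfile,target=/etc/Caddyfile \
  --mount type=volume,source=caddy_certs,target=/root/.caddy \
  --mount type=bind,source=/home/web/static,target=/www,readonly \
  --publish 8080:8080 --publish 8443:8443 \
  --env ACME_AGREE=true \
  abiosoft/caddy:1.0.3
```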
Logs get piped straight to journald. This is kind of an old box with a lot of non-dockerized services managed by systemd so I'd like to keep things consistent.
Hmm, well, the only other thing that I could think of is that Caddy is not running as root. Sure, you exec into the container as root, but that doesn't necessarily mean Caddy ran as root. Can you do a ps -ef inside the container?
It could also be some weird cgroup permission thing since you're actually running the container from the system slice. Perhaps, since you exec in from the user slice, you're able to read the Caddyfile, but otherwise the "service" doesn't have access? It's a bit of a reach, but Docker does its own cgroup isolation that is maybe interfering with what the system is also doing.
If you're super against swarm, you could always use --restart=always in your docker run command. That will take care of reboots and service failures. Then, Caddy will go up/down along with Docker. It would also eliminate any potential issues with cgroup permissions.
/srv # ps -ef
PID USER TIME COMMAND
1 root 0:00 /bin/parent caddy -ca https://acme-staging-v02.api.letsencrypt.org/directory
13 root 0:00 /usr/bin/caddy -ca https://acme-staging-v02.api.letsencrypt.org/directory
31 root 0:00 /bin/sh
36 root 0:00 ps -ef
Adding --restart=always to the run command and removing --rm didn't work.
Wasn’t suggesting that would fix it exactly. Was suggesting that perhaps the fact that it’s kicked off in a service slice might be making things weird.
If you stop the caddy “service”, and run the container manually, does that work?
Ah ha! I've made an important discovery. Launching the container with the -ca flag, either as a user or from systemd, causes Caddy to fall back to the default Caddyfile. Omitting it causes Caddy to correctly parse the Caddyfile! :tada:
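A likely explanation (my assumption, not confirmed anywhere in this thread): the image points Caddy at /etc/Caddyfile via its default CMD, and any arguments appended to docker run replace that CMD entirely, so the -conf flag is silently dropped. If that's the case, restating -conf alongside the extra flags should let both work:

```shell
# Sketch only: when overriding the image's default CMD, re-state -conf explicitly
# so Caddy still reads the mounted Caddyfile instead of falling back to defaults.
docker run --rm --name caddy \
  -p 8080:8080 -p 8443:8443 \
  -v /home/web/config/Caddyfile:/etc/Caddyfile \
  abiosoft/caddy:1.0.3-no-stats \
  -conf /etc/Caddyfile \
  -ca https://acme-staging-v02.api.letsencrypt.org/directory
```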
However, looking in journald I am now hanging on this line:
2019/10/16 05:58:41 [INFO][typokign.com] Obtain certificate
Caddy is unresponsive on all ports. I've modified my Caddyfile to run on standard ports 80/443:
(common) {
tls david@typokign.com
gzip
}
:80 {
redir https://{host}{uri}:443
}
https://typokign.com {
import common
redir https://www.typokign.com{uri}
}
https://www.typokign.com {
import common
root /www
}
https://matrix.typokign.com {
import common
proxy / localhost:10080
}
https://matrix.typokign.com:8448 {
import common
proxy / localhost:10448
}
https://riot.typokign.com {
import common
proxy / localhost:10080
}
https://mail.typokign.com {
import common
proxy / localhost:11080
}
Curling localhost:80/443 from the host just returns an empty response. I've modified the docker run args to reflect the new ports. Any idea why Caddy would be hanging on obtaining a cert?
Thank you!
Nevermind, not sure what broke, but I just removed the lock file and everything seems to be working. Thanks for all of your help @paullj1. Thread closed.
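For anyone landing here later: assuming Caddy v1 keeps its ACME sync locks somewhere under its storage directory (mounted at /home/web/ssl in this setup; the exact layout is an assumption on my part), stale locks can be located before deleting anything:

```shell
# Assumption: lock files live under the mounted .caddy storage directory.
find /home/web/ssl -name '*.lock' -print
# Inspect what turns up, then remove only the stale lock(s), e.g.:
# rm /home/web/ssl/locks/<name>.lock
```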
Hi there. I am running into an issue where caddy-docker is ignoring the Caddyfile I have mounted at /etc/Caddyfile, and instead serving the welcome page over port 2015. I have verified this by docker exec'ing into the container.
Version: 1.0.3
Host OS: Ubuntu Server 18
Logs:
I have tried this with a number of different Caddyfiles, including examples from the Caddy documentation, so I don't believe it's an issue with any particular Caddyfile.