Closed easy-easy closed 6 months ago
What did you use as the labels? What's in Caddy's logs?
my labels are the following:
labels:
  caddy: app.dev.mb
  caddy.0_import: common
  caddy.1_import: log app-dev
  caddy.reverse_proxy: '{{ upstreams 80 }}'
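For context, these caddy-docker-proxy labels normally sit on the proxied service in the compose file; a minimal sketch (the service name follows the `docker compose up app-dev` command mentioned below, while the image and network names are assumptions):

```yaml
services:
  app-dev:
    image: app-dev:latest      # image name assumed from the docker ps output in this thread
    networks: [caddy]          # shared network with the Caddy container (assumption)
    labels:
      caddy: app.dev.mb
      caddy.0_import: common
      caddy.1_import: log app-dev
      caddy.reverse_proxy: '{{ upstreams 80 }}'
```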
The snippets in my Caddyfile:
(common) {
	tls /etc/caddy/wildmb.crt /etc/caddy/wildmb.key
}

(log) {
	log {
		output file /var/log/caddy/{args[0]}.log {
			roll_size 1gb
			roll_local_time
			roll_keep 3
			roll_keep_for 120d
		}
		format json
	}
}
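With these snippets, the labels above should expand to roughly this site block in the generated Caddyfile (a sketch of what caddy-docker-proxy produces when the container is running; directive order follows the 0_/1_ prefixes, and the upstream IP is the one from the JSON config below):

```
app.dev.mb {
	import common
	import log app-dev
	reverse_proxy 172.22.0.31:80
}
```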
The log entry for the request:
{"level":"info","ts":1703059069.9339566,"logger":"http.log.access.log6","msg":"handled request","request":{"remote_ip":"172.22.0.1","remote_port":"41494","client_ip":"172.22.0.1","proto":"HTTP/2.0","method":"GET","host":"app.dev.mb","uri":"/.api/call/index","headers":{"User-Agent":["curl/8.5.0"],"Accept":["*/*"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"app.dev.mb"}},"bytes_read":0,"user_id":"","duration":0.000545562,"size":0,"status":308,"resp_headers":{"Server":["Caddy","Caddy"],"Location":["https://app.dev.mb/.api/call/index"],"Date":["Wed, 20 Dec 2023 07:57:49 GMT"],"Content-Length":["0"]}}
Caddy version: 2.7.4
In the Caddy log itself, there is no error or notice entry.
So it's just a basic setup with a self-signed wildcard certificate instead of a Let's Encrypt one. But there is no TLS issue whatsoever.
Regards.
Weird. I'll need to defer to @lucaslorentz about this; why do stopped containers still generate Caddyfile config?
I don't know.
Here is the config entry with a running container:
{
"handle" : [
{
"handler" : "subroute",
"routes" : [
{
"handle" : [
{
"handler" : "reverse_proxy",
"upstreams" : [
{
"dial" : "172.22.0.31:80"
}
]
}
]
}
]
}
],
"match" : [
{
"host" : [
"app.dev.mb"
]
}
],
"terminal" : true
}
and here is the entry when the container is stopped (but not removed):
{
"handle" : [
{
"handler" : "subroute",
"routes" : [
{
"handle" : [
{
"handler" : "reverse_proxy",
"upstreams" : [
{
"dial" : ":80"
}
]
}
]
}
]
}
],
"match" : [
{
"host" : [
"app.dev.mb"
]
}
],
"terminal" : true
},
As you can see, it is basically the same; only the container's IP is missing. As soon as I remove the container with docker container rm ..., the config entry also disappears, which makes sense.
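One way to watch this live is Caddy's admin API (port 2019 is Caddy's default admin address; the jq filter is only a sketch):

```
# Dump the HTTP routes and pull out every upstream dial address; with the
# container running this should show "172.22.0.31:80", with it stopped only ":80".
curl -s localhost:2019/config/apps/http/servers | jq '.. | .dial? // empty'
```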
I think the best solution would be a label entry like
caddy.no_upstreams: "respond 'Error...' 500"
or something like this, so that if no upstreams are available, the config entry becomes a direct response entry.
Regards...
(edit: changed the example label entry to match Caddy's respond config)
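For reference, Caddy's respond directive that the proposed label points at would render to a site block along these lines (a sketch; the 500 body text is just the placeholder from the label above):

```
app.dev.mb {
	respond "Error..." 500
}
```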
> Weird. I'll need to defer to @lucaslorentz about this; why do stopped containers still generate Caddyfile config?
It shouldn't scan stopped containers unless --scan-stopped-containers is set.
But it looks like I accidentally made scan-stopped-containers default to true. I will fix that in another PR. It's a breaking change, unfortunately, but changing to scan stopped containers by default was an even bigger breaking change, so it's better to restore the default to false.
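Until that fix lands, the flag can be pinned explicitly on the proxy service; a sketch (the flag name is taken from the comment above, but the image tag and the assumption that the image's entrypoint is the caddy binary are mine):

```yaml
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine   # image tag is an assumption
    command: ["docker-proxy", "--scan-stopped-containers=false"]
```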
Hello,
I am using docker compose for my setup. I have dev instances of my containers, which I'm running with
docker compose up app-dev
If this app crashes, or I stop it via Ctrl-C, the container itself is not removed; it gets the status "Exited":
824710605cde app-dev:latest "/usr/local/bin/pyth…" 2 hours ago Exited (0) 10 minutes ago
If I make any request to this stopped container, it results in a 308 redirect to itself:
The reverse proxy entry for the stopped container is still in the Caddy configuration, but the dial address is just ":80" instead of "container-ip:80", so I think this is a proxy error and not a Caddy error. (But I'm not 100% sure about that.)
If this is not a bug, how can I avoid these 308 redirects from crashed/stopped but not removed containers?
Regards, easy.
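As a stopgap, removing (not just stopping) the dev container also clears the route, per the observation earlier in the thread that docker container rm makes the config entry disappear; a sketch:

```
# "stop" leaves the container behind and, with the buggy default, its route too:
docker compose stop app-dev
# stop and remove in one step, so caddy-docker-proxy drops the site entry:
docker compose rm -s -f app-dev
```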