Open psyciknz opened 2 years ago
I've got an existing Caddy v1 instance where I reverse proxy sites on 3 different hosts in a standalone Docker config.
I'm looking at moving to an Ansible-deployed setup, and if possible something that will reverse proxy based on inventory rather than a hard-coded Caddyfile.
Is this method of operation supported?
Would this be docker instances on each host? And for caddy-docker-proxy to find everything, does each proxied service have to be in a single caddy network, or does it query the docker host for labels?
Ok, so I think I've worked out a bit more. I will use (at this stage) the standalone model. I've managed to build a new caddy-docker-proxy image, as I also need Cloudflare for the DNS challenge. Currently, in "the old way", I have:
*.internal.example.com:443 {
    tls letsencrypt@example.com {
        dns cloudflare <token>
    }
}
Where I'd then put a site for reverse proxying, e.g. photoprism as photos.internal.example.com.
So with caddy-proxy, I have the following labels on photoprism:
caddy: photos.internal.example.com
caddy.reverse_proxy: "{{upstreams 8091}}"
Where my original ports mapping for photoprism was 8091:2342. So, questions:
Is my upstreams right, am I upstreaming 8091 or 2342?
To get caddy-proxy to use the *.internal.example.com wildcard (possible?) or at least a cert for photos.internal.example.com using cloudflare for challenge, what would I put in the compose for caddy-proxy?
Thanks in advance.
So to answer my own questions: I need to be upstreaming 2342 (not sure how you do multiple services on the same internal port?).
But for the wildcarding etc. I found some instructions in #304. So with my *.internal.example.com in the Caddyfile, I've added the following to the photoprism docker compose:
labels:
    - autoheal=true
    - com.centurylinklabs.watchtower.enable=true
    - caddy=*.internal.example.com
    - caddy.1_@photos = host photos.internal.example.com
    - caddy.1_handle = @photos
    - caddy.1_handle.reverse_proxy = {{ upstreams 2342 }}
So I can see photoprism showing up in the caddy-proxy log, but I see errors relating to those labels:
{"level":"info","ts":1657658539.6641893,"logger":"docker-proxy","msg":"Process Caddyfile","logs":"[ERROR] Removing invalid block: parsing caddyfile tokens for 'handle': Caddyfile:29 - Error during parsing: unrecognized directive: reverse_proxy - are you sure your Caddyfile structure (nesting and braces) is correct?\n*.internal.example.com {\n\t\"@photos\" host photos.internal.example.com\n\thandle {\n\t\t\"reverse_proxy \" 172.29.0.3:2342\n\t}\n\t\"handle \" @photos\n}\n\n"}
Bit further on, any ideas for this one?
Is this method of operation supported?
This proxy doesn't support integrating multiple docker hosts without swarm.
Would this be docker instances on each host?
That would be an option. In this case, each host will have its own caddy-proxy, which will only serve the containers within that host.
And for caddy-docker-proxy to find everything, does each proxied service have to be in a single caddy network? or does it query the docker host for labels?
Caddy-docker-proxy will not find labels or containers from other machines. The ideal setup is to configure a swarm cluster with the 3 docker hosts.
Is my upstreams right, am I upstreaming 8091 or 2342?
2342, since communication happens within the docker network, not via the host.
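For example, with the ports mapping mentioned earlier (a sketch using the simple non-wildcard labels from above; the host-side 8091 never comes into play):
services:
    photoprism:
        ports:
            - "8091:2342"   # host:container - caddy-docker-proxy only cares about 2342
        labels:
            caddy: photos.internal.example.com
            caddy.reverse_proxy: "{{upstreams 2342}}"   # resolves to <container-ip>:2342 on the shared docker network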
To get caddy-proxy to use the *.internal.example.com wildcard (possible?) or at least a cert for photos.internal.example.com using cloudflare for challenge, what would I put in the compose for caddy-proxy?
You'd need to use a config pattern like this https://caddyserver.com/docs/caddyfile/patterns#wildcard-certificates which is kinda tricky to do with CDP.
@lucaslorentz can clarify if the merging behaviour will work... but I think you could put the TLS + DNS plugin config on your CDP container itself (because it's not specific to a service), and then you'd add matchers + handle labels for each subdomain on each service, and the proxy within the handle.
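Something like this on each proxied service is roughly what that would mean (a sketch only; whether these blocks merge cleanly across containers is exactly the part @lucaslorentz would need to confirm):
labels:
    - caddy=*.internal.example.com
    - caddy.@photos=host photos.internal.example.com   # one matcher per subdomain/service
    - caddy.handle=@photos
    - caddy.handle.reverse_proxy={{upstreams 2342}}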
Cool, yeah, I thought as much. Not sure I'm ready for swarm, as the machines have different purposes... but I'll work something out, probably Caddy on each. You'll see I've progressed further and am trying to get wildcard Cloudflare DNS certs working off a label.
So I think I've done what you said, as https://github.com/lucaslorentz/caddy-docker-proxy/issues/384#issuecomment-1182487165 mentions.
I have the *.internal.example.com in the Caddyfile for the caddy container, but I'm still working out the specific references to it in the photoprism container. It's picking up some of it, and getting the host I want, but then I get an error.
You can try to drop the handle directive and use reverse_proxy matchers:
Container 1:
caddy=*.internal.example.com
caddy.@photos = host photos.internal.example.com
caddy.reverse_proxy = @photos {{ upstreams 2342 }}
Container 2:
caddy=*.internal.example.com
caddy.@something = host something.internal.example.com
caddy.reverse_proxy = @something {{ upstreams 2342 }}
I believe the merged caddyfile in this case is:
*.internal.example.com {
    @photos host photos.internal.example.com
    @something host something.internal.example.com
    reverse_proxy @photos CONTAINER_1_IP
    reverse_proxy @something CONTAINER_2_IP
}
You can drop the handle if all you're ever doing is reverse_proxy, but if you have other things you want to do per host (e.g. encode gzip for compression maybe), then you need the handle (for all of them) to separate them from each other, and to properly have a "fallback" for unmatched hosts.
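In other words, with handle blocks the merged Caddyfile would end up looking roughly like this (a sketch; IPs/ports and the gzip example are illustrative):
*.internal.example.com {
    @photos host photos.internal.example.com
    handle @photos {
        encode gzip
        reverse_proxy CONTAINER_1_IP:2342
    }

    @something host something.internal.example.com
    handle @something {
        reverse_proxy CONTAINER_2_IP:2342
    }

    # fallback for requests whose host doesn't match any matcher above
    handle {
        abort
    }
}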
So labels are currently:
labels:
    - com.centurylinklabs.watchtower.enable=true
    - caddy=*.internal.example.com
    - caddy.@photos = host photos.internal.example.com
    - caddy.reverse_proxy = @photos {{ upstreams 2342 }}
And I'm getting an error on the reverse_proxy directive.
{"level":"info","ts":1657660707.7868304,"logger":"docker-proxy","msg":"Process Caddyfile","logs":"[ERROR] Removing invalid block: Caddyfile:27: unrecognized directive: reverse_proxy \n*.internal.example.com {\n\t\"@photos \" host photos.internal.example.com\n\t\"reverse_proxy \" @photos 172.29.0.3:2342\n}\n\n"}
Also, where should I expect to see the merged Caddyfile? I know where my original one is that I push into the container (/etc/caddy/Caddyfile).
Looks like you have some whitespace at the end of your reverse_proxy label key. Are you using docker-compose? Maybe try changing to:
- caddy.@photos=host photos.internal.example.com
- caddy.reverse_proxy=@photos {{ upstreams 2342 }}
The final Caddyfile will be in the logs, but inside a JSON string, the same way the log you shared has part of the Caddyfile.
So I think it's handling the reverse_proxy string right now. I had an empty line after the caddy.reverse_proxy line before it went to ports, which are probably not needed now. So now I get:
{"level":"info","ts":1657661635.2425275,"logger":"docker-proxy","msg":"Process Caddyfile","logs":"[ERROR] Removing invalid block: hostname appears in more than one automation policy, making certificate management ambiguous: *.internal.example.com\n*.internal.example.com {\n\t@photos host photos.internal.example.com\n\treverse_proxy @photos 172.29.0.3:2342\n}\n\n"}
I assume that's because I have both *.internal.example.com in my Caddyfile and in this compose for photoprism:
labels:
    - com.centurylinklabs.watchtower.enable=true
    - caddy=*.internal.example.com
    - caddy.@photos=host photos.internal.example.com
    - caddy.reverse_proxy=@photos {{ upstreams 2342 }}
Do I need to reference that config in the Caddyfile somehow? As that has the Let's Encrypt email address and the Cloudflare API key I need:
*.internal.example.com:443 {
    tls letsencrypt@example.com {
        dns cloudflare <token>
    }
}
*.internal.example.com:443 should just be *.internal.example.com, i.e. remove the port. If they don't match exactly, it'll be ambiguous for the parser to match them up and merge them. (The port is redundant anyways, since 443 is already the default unless you specified http:// instead.)
Or you could just define the domain+TLS+DNS as labels, you don't need to do that as a file. Put those labels on the CDP service itself.
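i.e. something along these lines on the caddy-docker-proxy container itself (a sketch; the token is a placeholder, and I haven't verified exactly how the tls argument merges with the nested dns sub-option):
labels:
    - caddy=*.internal.example.com
    - caddy.tls=letsencrypt@example.com
    - caddy.tls.dns=cloudflare <token>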
Ahh, I think you had it there. I matched what was in the Caddyfile with the labels, and then starting the photoprism container gave no errors!! Will have to verify it's all actually working now; I have a weird setup as it's a new server and everything is behind a local pfSense... but that's my problem to work out.
Still a bit puzzled how I can view the current config. I think I found it in the autosave.json, but I assume that is only written when it has completed successfully.
Update: I've just confirmed that I can connect via the domain name - very cool. Thanks for all the help.
autosave.json is the Caddyfile -> JSON adapted output (Caddy actually runs on a JSON config; the Caddyfile is a UX layer on top of that which produces JSON config).
The Caddyfile itself will be in Caddy's runtime logs (i.e. docker-compose logs caddy), which the CDP plugin will print out. It'll have lots of \n in it etc., unfortunately, because it gets JSON encoded when printed to the logs... #276 talks about approaches that could make that easier though (hint hint @lucaslorentz if you have some time soon :joy:)
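For what it's worth, something like this can make the logged Caddyfile readable again (a sketch; it assumes the compose service is named caddy as above, and that GNU sed is available):
# un-escape the \n and \t sequences in the most recent "Process Caddyfile" log line
docker-compose logs caddy | grep 'Process Caddyfile' | tail -n 1 | sed 's/\\n/\n/g; s/\\t/\t/g'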
So I might finally be able to move off my Caddy v1 implementation.
It'll be interesting to see what Ansible does with those caddy directives like {{ upstreams }}, as {{ is the Ansible designation for a variable.
With Ansible you would need to use "{{ '{{ upstreams 2342 }}' }}", which would result in a literal label value of {{ upstreams 2342 }}.
So for example:
caddy: photos.internal.example.com
caddy.reverse_proxy: "{{ '{{ upstreams 2342 }}' }}"
## or alternatively
caddy.reverse_proxy: "{{ '{{' }} upstreams 2342 {{ '}}' }}"
@psyciknz WRT viewing the Caddy config that is currently active, you can sh into the docker container that is running Caddy and query the API directly, i.e. curl http://localhost:2019/config - this gives you the live version of the Caddy config.
Or look inside your /config volume, which has both the JSON and the Caddyfile.
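If you'd rather not sh in interactively, something like this should also work from the host (a sketch; the container name caddy is assumed, and it relies on busybox wget being present in the Alpine-based image):
# dump the live JSON config via Caddy's admin API from outside the container
docker exec caddy wget -qO- http://localhost:2019/config/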