Closed: pedrolamas closed this 2 months ago
Thanks for opening an issue, I or someone will look at this soon!
For the record, I managed to bypass this issue by running two separate Caddy Docker containers, one for the internal stuff (using the auto-generated wildcard certificate), and the other for the external (with the Cloudflare certificate)
Any news on this? I currently plan to host services on port 443 with a (let's say Let's Encrypt) wildcard cert, and other services (also on port 443) with my own certs, so hosting two Caddy instances sadly isn't an option.
@amy1337 This issue is pretty old by now. Could you try with the latest Caddy version (ideally the latest 2.8 beta) and show us your config and corresponding `curl -v` commands that demonstrate what is not working as expected?
I think I have a similar problem: the `tls` directive from one site block leaks into other blocks. I want to use a wildcard certificate, but my DNS host doesn't support plugins for automatic renewal, so I requested certificates through certbot and copied them into the Caddy folder.
My config is:

```
{
	auto_https disable_certs
}

service1.example.com {
	tls /etc/caddy/certs/fullchain.pem /etc/caddy/certs/privkey.pem
	respond "ok1"
}

service2.example.com {
	# no tls option
	respond "ok2"
}
```
When I curl either https://service1.example.com or service2, they both use the wildcard certificate from the `tls` option in the service1 block. Is this behavior intentional? If yes, then why is the `tls` directive not global?

PS: Caddy 2.8.4 on Arch Linux

PPS: The docs page for the `tls` directive never mentions this implicit behavior.
> both use the wildcard certificate from the `tls` option in the service1 block. Is this behavior intentional?
I don't think it is intentional. You should get a better idea with `caddy adapt` to view the JSON that the Caddyfile is converted to.
There is a WIP feature PR that is meant to provide the behaviour you describe, where a wildcard site address takes priority over provisioning new certs for subdomains that the wildcard could cover.

In that PR they state that if you have a single domain you don't want to use the wildcard with, you would not use the feature. But since you've relied on the `tls` directive with an externally provisioned cert rather than one managed by Caddy, the behaviour you get is different AFAIK, so I assume you've triggered a bug (at least in the Caddyfile logic). I don't think we have a `tls acme` form to explicitly configure the default cert provisioning logic 🤔 (which would probably also resolve the linked PR use-case where a user wants some site blocks to ignore the wildcard and provision a new cert).
EDIT: As pointed out below, I misunderstood the `auto_https` mode. `disable_certs` instructs Caddy not to provision certs for a site address, so it found and used the one you manually loaded from an external source that was a valid match.
@vehlwn I think in your case, you want `auto_https ignore_loaded_certs`, not `disable_certs`. See https://caddyserver.com/docs/caddyfile/options#tls-options
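If I understand the suggestion, only the global options block from the config above would change; a sketch (site blocks and cert paths carried over from the earlier config, not verified against 2.8.4):

```
{
	auto_https ignore_loaded_certs
}

service1.example.com {
	tls /etc/caddy/certs/fullchain.pem /etc/caddy/certs/privkey.pem
	respond "ok1"
}

service2.example.com {
	# with ignore_loaded_certs, Caddy manages a certificate for this
	# site automatically instead of reusing the manually loaded one
	respond "ok2"
}
```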
The reason the Caddyfile's `tls` was designed as a directive in sites is because it's convenient to be able to associate the site addresses (domains) with the TLS automation policies (i.e. how to get a cert for the domains of that site) or TLS connection policies (if you need client auth, requiring clients to pass a certificate). If it was global then you'd have to make that association yourself (i.e. duplicate your domain in the config). The TLS layer is "global" though (it's an "app"; as @polarathene said, you can adapt your config to JSON with `caddy adapt -p` to get an idea of how it actually looks). The `auto_https` modes are there to provide escape hatches for that automatic wiring behaviour.
`ignore_loaded_certs` manages certificates automatically. I don't want Caddy to issue separate certificates for each domain, because I don't want all my domains to be publicly available for hackers to scrape. See https://crt.sh/?q=archlinux.org for example.
In nginx, the `ssl_certificate` option can be used outside any `server` block, so multiple SSL servers can inherit it.
I don't understand what your complaint is about then. Just use `tls <cert> <key>` as you already were.
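Concretely, that would mean loading the same wildcard cert in every site block that should use it; a sketch with the cert paths carried over from the config earlier in the thread:

```
service1.example.com {
	tls /etc/caddy/certs/fullchain.pem /etc/caddy/certs/privkey.pem
	respond "ok1"
}

service2.example.com {
	tls /etc/caddy/certs/fullchain.pem /etc/caddy/certs/privkey.pem
	respond "ok2"
}
```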
> I don't want caddy to issue different certificates for each domain because I don't want all my domains to be publicly available for hackers to scrape them.
That is why you use the wildcard certificate.
> In nginx, the `ssl_certificate` option can be used outside any `server` block so multiple SSL servers can inherit it.
So your actual complaint is not about the wildcard cert being used, but about why each site block has a `tls` directive instead of there being a global directive?
This was explained by looking at the JSON output.
- If you don't want to repeat the `tls` directive in each block, you can use `import <snippet-name-here>` to make it easier to manage.
- You can use a site address of `:443` without any FQDN, if I recall, with a `handle` directive for each subdomain.

I'll close this as inactive. I'm not sure there's anything actionable here.
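For reference, the snippet approach mentioned above might look something like this (the snippet name is illustrative; cert paths are taken from the config earlier in the thread):

```
(wildcard_cert) {
	# shared cert/key, defined once and imported where needed
	tls /etc/caddy/certs/fullchain.pem /etc/caddy/certs/privkey.pem
}

service1.example.com {
	import wildcard_cert
	respond "ok1"
}

service2.example.com {
	import wildcard_cert
	respond "ok2"
}
```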
I want Caddy to generate a wildcard certificate for `*.example.com`, and then use that for multiple hosts (like `homeassistant.example.com` and `other.example.com`) on port 443.

I also want to have `homeassistant-external.example.com` on port 21443 so I can use a manually set certificate.

Basically, I want that when I access `https://homeassistant.example.com:443` the certificate used is the auto-generated wildcard one, and when I access `https://homeassistant-external.example.com:21443` it uses the supplied certificate instead.

The problem is that the moment I add the `:21443` block, it will always pick up that certificate for both `:443` and `:21443`, and ignore the auto-generated one!

I have this setup working fine under nginx, but I haven't been able to do it with Caddy…

(Note: this is a follow-up on https://caddy.community/t/how-to-use-custom-certificates-with-wildcard-generated-ones/17808/1)
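For clarity, the intended setup could be sketched roughly like this (this is not the actual attached Caddyfile; the DNS provider module, upstream addresses, and cert paths are all assumptions):

```
*.example.com {
	# auto-generated wildcard cert via a DNS challenge
	# (requires a DNS provider plugin, e.g. the Cloudflare one)
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}

	@ha host homeassistant.example.com
	handle @ha {
		reverse_proxy homeassistant:8123
	}

	@other host other.example.com
	handle @other {
		respond "other"
	}
}

homeassistant-external.example.com:21443 {
	# manually supplied certificate for the external endpoint
	tls /etc/caddy/certs/external-cert.pem /etc/caddy/certs/external-key.pem
	reverse_proxy homeassistant:8123
}
```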
docker-compose.yml
Dockerfile
Caddyfile
Log entries