discobean opened 8 years ago
I just wrote a check for that, only to find out that it already fails fairly early, on:
```
ssl_certificate(): auto-ssl: could not determine domain for request (SNI not supported?) - using fallback -
```
When a browser makes a request to an IP address, it does not send SNI, so "domain" won't even be set.
I'm not sure why mine was sending a request to Let's Encrypt, which then returned an error about registering an SSL cert for an IP :S
@discobean: Do you have the specific error message that happens when you've seen this?
@GUI I sometimes get the same error. My guess is that this happens because the clients (which are probably scanner bots) send a Host header equal to the target IP address, e.g.:
```
2018/02/20 02:10:44 [warn] 490#490: *2723 [lua] init_by_lua:11: allow_domain(): auto-ssl: debug allow_domain domain XXX.XXX.XX.XX, context: ssl_certificate_by_lua*, client: 107.170.236.113, server: 0.0.0.0:8443
2018/02/20 02:10:49 [error] 490#490: *2723 [lua] lets_encrypt.lua:41: issue_cert(): auto-ssl: dehydrated failed: env HOOK_SECRET=XXXXX HOOK_SERVER_PORT=8999 /usr/local/bin/resty-auto-ssl/dehydrated --cron --accept-terms --no-lock --domain XXX.XXX.XX.XX --challenge http-01 --config /etc/resty-auto-ssl/letsencrypt/config --hook /usr/local/bin/resty-auto-ssl/letsencrypt_hooks status: 256 out: # INFO: Using main config file /etc/resty-auto-ssl/letsencrypt/config
Processing XXX.XXX.XX.XX
 + Signing domains...
 + Creating new directory /etc/resty-auto-ssl/letsencrypt/certs/XXX.XXX.XX.XX ...
 + Generating private key...
 + Generating signing request...
 + Requesting authorization for XXX.XXX.XX.XX...
err: + ERROR: An error occurred while sending post-request to https://acme-v01.api.letsencrypt.org/acme/new-authz (Status 400)
Details:
{
  "type": "urn:acme:error:malformed",
  "detail": "Error creating new authz :: Issuance for IP addresses not supported",
  "status": 400
}
, context: ssl_certificate_by_lua*, client: 107.170.236.113, server: 0.0.0.0:8443
2018/02/20 02:10:49 [error] 490#490: *2723 [lua] ssl_certificate.lua:97: issue_cert(): auto-ssl: issuing new certificate failed: dehydrated failure, context: ssl_certificate_by_lua*, client: 107.170.236.113, server: 0.0.0.0:8443
2018/02/20 02:10:49 [error] 490#490: *2723 [lua] ssl_certificate.lua:286: auto-ssl: could not get certificate for XXX.XXX.XX.XX - using fallback - failed to get or issue certificate, context: ssl_certificate_by_lua*, client: 107.170.236.113, server: 0.0.0.0:8443
```
where XXX.XXX.XX.XX is our actual server IP address.
I'm all for blocking this in the code, but, eh: your allow_domain() should only allow FQDNs that you actually want a certificate generated for (a whitelist approach), as opposed to "not for these" (a blacklist approach).
You should never simply "return true" in allow_domain(). It puts strain on your system and accumulates failures towards Let's Encrypt.
We go a step further, though. We export a list of all our webhosting customers to a Lua table file on disk, read that at startup, and store it in an SHM key/value store inside nginx, so we can easily check which hosts we should or should not be generating certs for.
We have multiple wildcard certs for our management hostnames, so for those we exit early and swap in the corresponding SSL chain. We also exit early for IP addresses and localhost. Lastly, we check DNS and whether the name is actually in our list of domains configured on our systems.
Most of these checks just save time rather than being strictly necessary. The point being: use a whitelist instead of a blacklist. It's common coding practice, too. Better safe than sorry.
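A rough sketch of that setup, assuming a `lua_shared_dict allowed_hosts 1m;` in the nginx http block and a hypothetical `/etc/nginx/allowed_hosts.lua` file that simply returns a Lua table of hostnames (the names, paths, and file format here are illustrative, not our actual configuration):

```lua
-- run from init_by_lua_file: load the exported customer host list into a shared dict.
-- The exported file is assumed to look like:
--   return { "example.com", "www.example.com", "customer.example.org" }
local ok, hosts = pcall(dofile, "/etc/nginx/allowed_hosts.lua")
if ok and type(hosts) == "table" then
  local dict = ngx.shared.allowed_hosts
  for _, host in ipairs(hosts) do
    dict:set(host, true)
  end
else
  ngx.log(ngx.ERR, "could not load /etc/nginx/allowed_hosts.lua")
end
```

With that in place, allow_domain only needs a `ngx.shared.allowed_hosts:get(domain)` lookup.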
And sometimes it's better not to over-engineer and to skip various checks that could become yet another point of failure :) Especially if you have a highly dynamic environment.
I'm not saying this should be blocked in lua-resty-auto-ssl, just sharing what I've found out while testing if anyone else stumbles upon this issue.
Quick hack in allow_domain:
```lua
-- reject anything that looks like an IPv4 literal
local ip_chunks = { domain:match("(%d+)%.(%d+)%.(%d+)%.(%d+)") }
if #ip_chunks == 4 then
  return false
end
```
Whitelisting is not over-engineering, it's doing it right. There are so many security incidents in the world because developers are using blacklisting instead of whitelisting, it's not even remotely funny.
Your check also misses IPv6 addresses, and is actually over-engineered:
```lua
-- reject IPv4 literals and anything containing ":" (IPv6 literals)
if string.match(domain, "%d+%.%d+%.%d+%.%d+") or string.find(domain, ":", 1, true) then
  return false
end
```
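For context, both fragments above are meant to live inside the allow_domain callback that lua-resty-auto-ssl exposes. A minimal wiring sketch, where the shared-dict lookup at the end is only illustrative and assumes the whitelist setup described earlier:

```lua
init_by_lua_block {
  auto_ssl = (require "resty.auto-ssl").new()

  auto_ssl:set("allow_domain", function(domain)
    -- refuse IPv4 literals and anything containing ":" (IPv6 literals)
    if string.match(domain, "%d+%.%d+%.%d+%.%d+") or string.find(domain, ":", 1, true) then
      return false
    end
    -- only issue certificates for hosts we actually serve (assumed whitelist dict)
    return ngx.shared.allowed_hosts:get(domain) ~= nil
  end)

  auto_ssl:init()
}
```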
You don't know our situation, but you're quick to judge. Point taken.
You could probably create a PR and change the allow_domain in the README.md to something more sensible than returning true, to make everyone safer.
Considering Let's Encrypt doesn't issue certs for IP addresses, it would be great to detect and ignore IP addresses before sending a request to Let's Encrypt, rather than having to add that logic to the allow_domain configuration/function.