Closed grahamedgecombe closed 7 years ago
It doesn't work for non-default vhosts if you're using SNI.
Still looking into it; it looks like `SSL_EXT_FLAG_RECEIVED` is somehow being reset between the calls to `custom_ext_parse` and `custom_ext_add`.
Any news on this issue, or an estimate of when it will be fixed?
I think it's a bug in OpenSSL: it looks like 1.1.0 invokes the SNI callback after `SSL_EXT_FLAG_RECEIVED` is set. nginx's SNI handler then wipes this flag out as a side effect of calling `SSL_set_SSL_CTX`, so when `custom_ext_add` is called against the replacement context it doesn't know that the client asked for the SCT extension.
I assume OpenSSL 1.0.2 invokes the SNI callback before setting `SSL_EXT_FLAG_RECEIVED`, but I still need to double-check this.
I have confirmed this on my server: 1.0.2j works, but 1.1.0b doesn't.
I just came across this module and I'm having the same problem with OpenSSL 1.1. Is there a bug filed with OpenSSL regarding this? I took a look at their github issue tracker and didn't seem to find anything relevant.
I took a deeper look at the OpenSSL code and confirmed that `SSL_set_SSL_CTX` is wiping out the list of supported client extensions. It seems the TLS extension handling code was significantly refactored in 1.1.0, which is likely why the problem only shows up now.
Thanks for reporting it to OpenSSL.
In the meantime, this is a very hacky patch I've been using to work around the issue:
```diff
diff --git a/ssl/ssl_lib.c b/ssl/ssl_lib.c
index bd0fbf810..61870eee7 100644
--- a/ssl/ssl_lib.c
+++ b/ssl/ssl_lib.c
@@ -3385,6 +3385,11 @@ SSL_CTX *SSL_set_SSL_CTX(SSL *ssl, SSL_CTX *ctx)
     if (new_cert == NULL) {
         return NULL;
     }
+    custom_exts_free(&new_cert->srv_ext);
+    if (!custom_exts_copy(&new_cert->srv_ext, &ssl->cert->srv_ext)) {
+        ssl_cert_free(new_cert);
+        return NULL;
+    }
     ssl_cert_free(ssl->cert);
     ssl->cert = new_cert;
```
FYI the patch will need updating for OpenSSL 1.1.1.
... updated the patch (the field is now called `custext`) and got it working:
```diff
--- openssl-orig/ssl/ssl_lib.c  2017-04-25 16:27:59.187224235 +0000
+++ openssl/ssl/ssl_lib.c       2017-04-25 16:29:04.400062597 +0000
@@ -3593,6 +3593,11 @@
     if (new_cert == NULL) {
         return NULL;
     }
+    custom_exts_free(&new_cert->custext);
+    if (!custom_exts_copy(&new_cert->custext, &ssl->cert->custext)) {
+        ssl_cert_free(new_cert);
+        return NULL;
+    }
     ssl_cert_free(ssl->cert);
     ssl->cert = new_cert;
```
This has been fixed upstream :) https://github.com/openssl/openssl/commit/21181889d78f95f10738813285f681acd3b32c6c
Confirmed. Thank you!
OpenSSL 1.1.0f has been released with the fix. I've documented the bug in the README, so I think this can be closed now.
Has anyone successfully got this working now that OpenSSL 1.1.0f is available? I'm encountering the same behavior as previous versions where the CT extension isn't appearing when a non-default server is used. I've confirmed nginx is built with the proper OpenSSL version, and I'm using the latest git master of nginx-ct.
```
nginx version: nginx/1.13.1
built by gcc 4.9.2 (Debian 4.9.2-10)
built with OpenSSL 1.1.0f  25 May 2017
TLS SNI support enabled
```
Server block contains the following:
```nginx
listen 192.99.17.134:443 ssl http2;
listen [2607:5300:60:4286:1183:d3a4:263c:fb36]:443 ssl http2 default reuseport;
server_name r1ch.net www.r1ch.net;

ssl_ct on;

ssl_certificate     /etc/nginx/ssl/r1ch.net/rsa/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/r1ch.net/rsa/key.pem;
ssl_ct_static_scts  /etc/nginx/ssl/r1ch.net/rsa/ct;

ssl_certificate     /etc/nginx/ssl/r1ch.net/dsa/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/r1ch.net/dsa/key.pem;
ssl_ct_static_scts  /etc/nginx/ssl/r1ch.net/dsa/ct;
```
Can be verified by testing those IPs directly:
```
openssl s_client -ct -tls1_2 -servername r1ch.net -connect 192.99.17.134:443
openssl s_client -ct -tls1_2 -servername r1ch.net -connect '[2607:5300:60:4286:1183:d3a4:263c:fb36]:443'
```
I understand you built against OpenSSL 1.1.0f, but are you absolutely positive that you are running against 1.1.0f? Did you build OpenSSL statically?
Which libssl/libcrypto .so files is nginx pointing to, and are they 1.1.0f?
```
$ ldd `which nginx` | grep -e crypto -e ssl
        libssl.so.1.0.0 => /lib/i386-linux-gnu/libssl.so.1.0.0 (0xb75ef000)
        libcrypto.so.1.0.0 => /lib/i386-linux-gnu/libcrypto.so.1.0.0 (0xb7443000)
$ ls -l /lib/i386-linux-gnu/libssl.so.1.0.0
-rw-r--r-- 1 root root  358564 Jan 30 21:39 /lib/i386-linux-gnu/libssl.so.1.0.0
$ ls -l /lib/i386-linux-gnu/libcrypto.so.1.0.0
-rw-r--r-- 1 root root 1738900 Jan 30 21:39 /lib/i386-linux-gnu/libcrypto.so.1.0.0
```
I am running nginx 1.12.0 with OpenSSL 1.1.0f and it works fine with many virtual hosts. Make sure you have restarted nginx; a reload is not enough, as nginx will still be using the old library afterwards.
@lukastribus I am definitely dynamically linking against OpenSSL 1.1.0f, I double checked this using ldd and strings to verify "OpenSSL 1.1.0f" was present in the libssl library. I also checked the entry under /proc/x/maps to make sure it hadn't somehow loaded a different library.
@rraptorr I did a complete stop / start just to be 100% sure, it's definitely using the new libraries.
I have tried nginx-ct v1.3.2 with the same result. I've also tried downgrading to nginx 1.11.13 (which should match 1.12.0) in case the SSL renegotiation / TLS 1.3 support in the 1.13 series changed things, but this also had no effect.
Maybe it has something to do with you using RSA/ECDSA cert combo? I am using only one cert type per vhost (some RSA and some ECDSA). Maybe try to temporarily disable one of the certs?
Figured out the problem. My default host wasn't using `ssl_ct`, which seems to cause all non-default hosts to lose support as well. I wonder if this is a technical limitation of CT / TLS extensions or an issue with nginx-ct.
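For anyone else hitting this symptom, the workaround that follows from the observation above is to enable `ssl_ct` on the default server block as well. A sketch (the certificate paths and the catch-all `server_name` are placeholders, not from my actual config):

```nginx
# Default server for the listen address: ssl_ct must be enabled here too,
# otherwise the non-default vhosts stop sending the SCT extension.
server {
    listen 443 ssl http2 default_server;
    server_name _;

    ssl_ct on;
    ssl_certificate     /etc/nginx/ssl/default/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/default/key.pem;
    ssl_ct_static_scts  /etc/nginx/ssl/default/ct;
}
```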
Isn't it already compatible?