chromium / badssl.com

🔒 Memorable site for testing clients against bad SSL configs.
https://badssl.com
Apache License 2.0

Setting up publicly-available badssl.com instance #492

Closed: staticfloat closed this issue 2 years ago

staticfloat commented 2 years ago

Hello there,

We've been using badssl.com in the CI runs for the Downloads.jl package, but we started to get HTTP 401 errors and figured this might be because you guys ask us not to use this for CI. Sorry, our bad for not reading the README thoroughly enough.

I followed the instructions to create a badssl.julialang.org instance that we can use for our CI needs, but I'm a little confused about how to replicate the setup you have on badssl.com. In particular, because you seem to be generating all of the certificates yourself, our worker machines (and other users that want to run the Downloads.jl tests) won't be able to trust the certificates out of the box.

Is there some way to make use of a wildcard certificate or something similar to base the "normal" certificates off of a CA that is not self-generated? How is this handled on the main badssl.com server?

Thanks!

christhompson commented 2 years ago

Always happy to help advise on standing up local instances! Yeah, the badssl.com site doesn't necessarily have the guarantees you'd want for pointing CI against it (mostly: if something breaks you can't fix it yourself), and we don't have a good way to track downstream consumers of the site to notify people about things like server downtime.

The easiest way to do this would probably be to use the "test" server, which generates its own CA and creates all of the certificates locally. This isn't an exact match for real-world behavior (some TLS libraries and user agents behave slightly differently against certificates that chain to a user-installed root certificate), but it avoids a lot of the maintenance burden of acquiring and renewing certificates (which is where I tend to cause breakage). The downside is you'd need to configure your CI bots to add the root certificate to their trust stores, but I think that shouldn't be too bad (hopefully -- I've never done this myself though). The upside is that server setup is fast (it has Docker support), and it includes more test cases than we can get public certs for (e.g., stuff that would violate the BRs).
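For the trust-store side, one way to handle a self-generated root in CI is to trust it explicitly per-invocation, or install it system-wide on Debian-style images. A minimal sketch, assuming nothing about the test set's actual file layout (all names below are made up):

```shell
# Generate a throwaway root CA, mirroring what the "test" set does
# (the real test scripts produce their own CA; names here are illustrative).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.crt -subj "/CN=badssl-test-ca"

# Option 1: trust it per-invocation, no system changes needed:
#   curl --cacert ca.crt https://test.badssl.example/
# Option 2: install it system-wide on Debian-style CI images:
#   sudo cp ca.crt /usr/local/share/ca-certificates/badssl-test-ca.crt
#   sudo update-ca-certificates

# Sanity-check what was generated:
openssl x509 -in ca.crt -noout -subject
```

Per-invocation trust (option 1) keeps the bot's system store clean, which matters if the same machine runs other TLS tests.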

The way we run the public badssl.com site with publicly-trusted certificates is... very manual right now. We generously get most of our certificates provided for free, but we still have to generate the CSRs, request the certificates from the CA, and add them to our "production overlay", which fills them into the certs/sets/prod/pregen folders; the cert generation scripts then copy the pre-existing publicly-trusted certificate chains instead of creating new self-signed ones.
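That CSR step can be sketched with plain openssl (key size, file names, and the CN are assumptions for illustration, not the repo's actual tooling):

```shell
# Create a fresh key plus a certificate signing request to hand to the CA.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout badssl.key -out badssl.csr -subj "/CN=badssl.com"

# Confirm the request's self-signature before submitting it:
openssl req -in badssl.csr -noout -verify
```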

This quarter I'm working on transitioning at least some of the certificates over to ACME. If we could get all certificate types provisioned through ACME, then the scripts could take your new "base domain" and get certs under that domain instead. But completely transitioning to automatic provisioning is likely still a long way off, and several of our test cases may be impossible to provision via ACME.

staticfloat commented 2 years ago

Thanks for the detailed answer! After reviewing the options, we decided to recreate the small number of testing endpoints manually by adapting the docker-nginx-certbot Docker image. With this setup, we can get automatic certs delivered/generated for the apex domain, wrong.host, and untrusted-root, which are the three test cases we're most interested in.
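As a sanity check for that kind of setup, the wrong.host failure mode can be reproduced locally without a running server, by validating a cert against a hostname it wasn't issued for (the self-signed cert and domain names below are illustrative):

```shell
# Self-signed cert for the apex domain (name is illustrative).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout srv.key -out srv.crt \
  -subj "/CN=badssl.julialang.org" \
  -addext "subjectAltName=DNS:badssl.julialang.org"

# Chain + hostname check against the right name succeeds:
openssl verify -CAfile srv.crt -verify_hostname badssl.julialang.org srv.crt

# Against a mismatched name it fails, which is exactly the wrong.host case:
openssl verify -CAfile srv.crt -verify_hostname wrong.host srv.crt || true
```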