Valian / docker-nginx-auto-ssl

Docker image for automatic generation of SSL certs using Let's Encrypt and OpenResty
https://hub.docker.com/r/valian/docker-nginx-auto-ssl/
MIT License

Is there an explanation of how this works somewhere? Blog post, etc ...? #53

Status: closed (polterguy closed this issue 2 years ago)

polterguy commented 3 years ago

I believe this sits as a proxy nginx instance in front of my own images, and once HTTPS is requested, it automatically retrieves a Let's Encrypt/certbot/ACME SSL certificate/key for me. However, I would love more information about my assumptions. For instance, has anybody analysed what it does, such as the overhead of this approach compared to exposing the site directly (over nginx) instead of this tunnelling approach?

What happens to the original IP address, is it lost - If so, how do I add it?

Admirable work though, I must confess. There should be a "debug switch" in it, allowing me to use the same docker-compose file when debugging (minus one switch) as I use during deployments ...

However, the latter may just be me not having configured my docker-compose file correctly ...

I've gone through the process of adding certbot to existing nginx sites and other sites running Docker, and it's ridiculously complex. If this works the way I think it does, it's really quite brilliant, I must confess :)

Valian commented 2 years ago

Hi @polterguy!

In this repo I'm using a great package, https://github.com/auto-ssl/lua-resty-auto-ssl. All I've done is make it super easy to use :)

What happens to the original IP address, is it lost - If so, how do I add it?

The certificate doesn't contain an IP address. If you still have it on disk and migrate it to another machine / IP address, it should still work. If not, it will be regenerated automatically ;)

There should have been a "debug switch" in it though, allowing me to use the same docker-compose file when debugging (minus one switch) as I use during deployments ...

Locally you probably don't need HTTPS at all; plain HTTP is usually enough. I usually use two nginx services:

  1. This one, present only in production, responsible for terminating HTTPS and routing traffic to the next nginx (think of it as an Ingress in Kubernetes).
  2. A second one, present in both development and production, responsible for static file handling, project-specific routing, etc.

This way it's very easy to set up multiple such projects on a single box; all I had to do was update the SITES env var in the internet-facing nginx.
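To illustrate, here's a minimal docker-compose sketch of that two-nginx layout. The `SITES` and `ALLOWED_DOMAINS` variables and the `/etc/resty-auto-ssl` volume come from this image's README; the `example.com` domain and the `nginx-internal` service name are placeholders for your own setup:

```yaml
services:
  # Production-only ingress: terminates HTTPS and fetches certs automatically.
  nginx-ssl:
    image: valian/docker-nginx-auto-ssl
    restart: on-failure
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ssl-data:/etc/resty-auto-ssl   # persist certs across restarts/migrations
    environment:
      ALLOWED_DOMAINS: "example.com"                 # placeholder domain
      SITES: "example.com=nginx-internal:80"         # route domain -> internal nginx

  # Second nginx, used in development and production alike:
  # static files and project-specific routing.
  nginx-internal:
    image: nginx
    # mount your project-specific config/static files here

volumes:
  ssl-data:
```

Adding another project to the same box is then just another entry in `SITES` (entries are separated with `;`) plus another internal service.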


A short walkthrough of what's going on here.

There's more to it, e.g. locks across all workers so that only one certificate is generated per domain at a time, uploading the certificate to shared storage if configured, checking whether the domain is whitelisted, communication with Let's Encrypt, etc. But all in all, it's fairly efficient and shouldn't add any noticeable overhead to nginx.
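For a rough picture of what that package does inside nginx, its documented setup looks roughly like this (a sketch adapted from the lua-resty-auto-ssl README; the `example.com` whitelist check is a placeholder, and this image generates the real config for you):

```nginx
# http {} context: initialise auto-ssl and register the domain whitelist check
init_by_lua_block {
  auto_ssl = (require "resty.auto-ssl").new()
  -- Only issue certificates for domains we explicitly allow
  auto_ssl:set("allow_domain", function(domain)
    return domain == "example.com"  -- placeholder check
  end)
  auto_ssl:init()
}

init_worker_by_lua_block {
  auto_ssl:init_worker()
}

server {
  listen 443 ssl;
  # Runs during the TLS handshake: serves the cert from storage,
  # or triggers issuance via Let's Encrypt on the first request.
  ssl_certificate_by_lua_block {
    auto_ssl:ssl_certificate()
  }
  # Self-signed fallback cert, required for nginx to start
  ssl_certificate /etc/ssl/resty-auto-ssl-fallback.crt;
  ssl_certificate_key /etc/ssl/resty-auto-ssl-fallback.key;
}

server {
  listen 80;
  # Answers Let's Encrypt's http-01 challenges
  location /.well-known/acme-challenge/ {
    content_by_lua_block {
      auto_ssl:challenge_server()
    }
  }
}
```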

I'll probably add it to the README, it might be handy :)