Closed: gabrtv closed this issue 9 years ago
I'm not comfortable enough with Chef, but I took a look at https://github.com/coroutine/chef-nginx_ssl_proxy/blob/master/recipes/default.rb, which deploys nginx with SSL termination. I thought maybe the same approach could be used in https://github.com/opdemand/deis-cookbook/blob/master/recipes/nginx.rb, then just create an encrypted data bag called `nginx_ssl_certs` from the SSL certificate.
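For illustration, an item in the proposed `nginx_ssl_certs` encrypted data bag might look something like this (the item id and key names here are hypothetical, not from any existing cookbook):

```json
{
  "id": "proxy",
  "certificate": "-----BEGIN CERTIFICATE-----\n...",
  "key": "-----BEGIN PRIVATE KEY-----\n..."
}
```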
@enyachoke, that looks like a reasonable approach to fixing this. Thanks for tracking this down. We'll get to this at some point, but if you'd like to make a pull request we'd be all over it.
I believe we can use this approach: scp the encrypted data bag's `secret` to the instance prior to running the recipe. This requires a few changes to the current provisioning script; `knife-provider` needs to be broken down into the following steps.
Thoughts?
I'm a little bit against the idea of doing this in a data bag, since it locks us into using Chef for configuration management. We should be looking into a more universal option that works with other configuration management tools. However, if we're more concerned with just putting out a v1.0, we can use an encrypted data bag for the time being. See #251 for more discussion on this.
I missed @shredder12's proposal. I like that idea better. :)
@shredder12 @bacongobbler assuming we scp'd the private key in step 3 (versus using an encrypted data bag), how would we use it in chef land?
While data bags aren't portable, they do offer a more secure way of performing key distribution, so I don't want to dismiss it w/out serious consideration, especially since we don't have another CM implementation yet.
@gabrtv @bacongobbler I think there is some mis-understanding. I'm not discarding the use of data bags. I am simply suggesting a way of sharing the encrypted data bag's secret with the node. The certs will be saved in the data bag, but a medium is still needed to securely share the secret, to decrypt the certs. My suggestion is scp.
Let me know if I'm missing something :)
Our use case has changed a bit, but some configuration should be done to `deis-router` to support SSL keys handed to the router via `confd`.
Could you build a container with the key in it locally, and deploy it out to the servers with fleet? This would give control over the certificate content. Discourse does something similar to this.
Absolutely. You can use `ADD` commands inside the router's Dockerfile to install the private and public keys into the router, and edit the nginx config to specify which endpoints they should apply to (referring to #535).
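For example, the Dockerfile additions might look like this (file names and install paths are hypothetical):

```dockerfile
# Hypothetical: bake the cert and key into the router image at build time.
ADD ssl/server.crt /etc/nginx/ssl/server.crt
ADD ssl/server.key /etc/nginx/ssl/server.key
```

Note that baking keys into the image means rebuilding and redeploying the router whenever a cert changes.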
Also, we have relevant work going on in #964, so I think it's time to revive this thread and give it another kick of the tires once #535 is done.
Updated title and OP to better reflect Deis-current.
Hey guys, my load balancers on Azure do not support SSL offloading. I am considering customizing the router image to support app certificates. Is there any active work on this feature? If not, what is the current thinking on what this should look like? I'd love to contribute.
@jparkerCAA sounds about right. We would like there to be a way for someone to add an SSL cert to their application, similar to Heroku's client syntax, which would then be supported in the router. No active work has been done on this, though I added cluster-wide support for offloading SSL in 0.15: #2194
@bacongobbler sounds good. I'll jump in.
How about replacing the current router with something based on http://www.vulcanproxy.com/? It's an etcd-backed load balancer that already has SNI-based SSL support (and it already saves the keys in encrypted form in the backend). We could avoid reinventing the wheel there, and also avoid all the trickiness of configuring nginx so it doesn't break with special kinds of connections (timeouts, chunking, ...).
Sorry if I missed something in the current router that makes this impossible. I didn't look at the code yet.
@stefanfoulis it's missing other key features like TCP support, rolling updates, sticky session support, and WebSocket support, all of which are required in the router today. nginx is not something we can easily replace without losing core functionality that users rely on.
Hoping to have a PR ready for review in a few days that includes the necessary router, controller, and client changes. Basic functionality is working but needs some polish and testing.
+1
SSL is complex; sadly enough, I don't know the details. But from my perspective the following items need to be taken into account:
> due to the HTTPS protocol limitations virtual servers should listen on different IP addresses...
This is not true in current versions of nginx:

> but impose the restriction that the client must support SNI
Like I said, I am not an expert :)
only one point gone from my list though :(
Sorry, I've been off the grid for a long stretch. @nathansamson let me explain my approach to some of these points.
@nathansamson ah, I see your use case on the other issue https://github.com/deis/deis/pull/2799
I think the main thing that needs to change from your existing implementation in #2799 (although I didn't look at it closely) is the nginx configuration: rather than adding all domains in one server block, add them one by one. This might get rather complicated for multi-domain and wildcard certificates, though. It also needs to be taken into account that some domains of an app won't have a certificate, so they should still listen on HTTP port 80 without a redirect to HTTPS.
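To illustrate the per-domain idea (a sketch only; domain names and cert paths are hypothetical), the generated config could emit one server block per cert-bearing domain, with cert-less domains staying on plain HTTP:

```nginx
# Domain with a certificate: terminate SSL here (requires client SNI support
# when multiple such blocks share one IP).
server {
    listen 443 ssl;
    server_name app.example.com;
    ssl_certificate     /etc/nginx/ssl/app.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/app.example.com.key;
}

# Domain without a certificate: plain HTTP, no redirect to HTTPS.
server {
    listen 80;
    server_name other.example.com;
}
```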
Nathan
I've created an initial proposal at #2911 which (generally) follows https://devcenter.heroku.com/articles/ssl-endpoint. I'd really appreciate your input!
The controller currently supports HTTP and HTTPS cluster-wide by installing an SSL cert on the load balancer fronting Deis. We need to provide a way of distributing SSL keys to the router for applications.