ejether closed this pull request 7 years ago.
w00t! Thanks! However, I think we want to make sure we aren't creating any objects without helm - so in this case, the secret object would also need to be created by helm. This allows people to just use helm and get a working install (WIP docs at http://predictablynoisy.com/zero_to_jupyterhub/index.html!). So can you modify it so the secret object also gets created with helm templates?
Thank you so much for the patch!
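For illustration, here is a minimal sketch of what a helm-templated secret object could look like. The file name (templates/hub-secret.yaml) and the value paths (hub.cookieSecret, token.proxy) are assumptions for the example, not necessarily what this chart uses:

```yaml
# templates/hub-secret.yaml -- hypothetical file name and value paths
apiVersion: v1
kind: Secret
metadata:
  name: hub-secret
type: Opaque
stringData:
  # stringData accepts plain strings; Kubernetes stores them base64-encoded
  hub.cookie-secret: {{ .Values.hub.cookieSecret | quote }}
  proxy.token: {{ .Values.token.proxy | quote }}
```

With something like this in place, users supply the values in their config yaml and helm owns the Secret's lifecycle along with the rest of the release.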
I see where you are coming from on that and I think with the current docs they would be able to get up and running as is shown in those docs without using Kubernetes secrets.
However, if helm creates the secret, then it has to be in the config file or it will be overwritten/removed on the next helm upgrade. If it has to be in the config file, then there is no point in using Kubernetes secrets. I'd like to check the config file into git without secrets in it.
Do you have any thoughts on that?
The config file is in git, but we have two git repositories - one that can be public and has no secrets, and another that isn't public and has secrets. You can pass multiple yaml files to helm and it'll merge them all. We also pass the same files to each helm upgrade - you need to do that anyway, otherwise you're going to reset all your configuration to 'default' on each helm upgrade.
We want to make sure that each deployment we make is 100% traceable back to a git repository hash, and so the only way to do that is to check in your secrets to git. Splitting the yaml files into secret and not-so-secret makes it work ok.
I do agree that there's not much point in using kubernetes secrets for these objects at the moment outside of just 'code cleanliness'.
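As a concrete sketch of that split (all file names and keys below are placeholders, not the chart's actual schema), the public values could live in config.yaml and the sensitive ones in a separate secrets.yaml, with both passed to every install and upgrade, e.g. helm upgrade RELEASE CHART -f config.yaml -f secrets.yaml. Helm merges the files in order, with later files taking precedence.

config.yaml (public repo, no secrets; keys are illustrative):

```yaml
hub:
  baseUrl: /
singleuser:
  memory:
    limit: 1G
```

secrets.yaml (private repo or otherwise protected; keys are illustrative):

```yaml
hub:
  cookieSecret: "<random hex, e.g. from openssl rand -hex 32>"
token:
  proxy: "<random hex>"
```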
In that case, my use case probably isn't in line with the way you want your project to function.
In light of that, and the fact that moving a couple of the configuration values out of the ConfigMap and into secrets breaks the get_config() method in jupyterhub_config.py, I'll withdraw the pull request.
We'll probably migrate the secrets to secret objects soon either way, and I will look to reuse some of this work! Thank you for your time!
Am curious what your use case is that you want to manage secrets separately from helm, without having them checked into a git repository. If the secrets change (especially cookiesecret), then all users will be logged out. Are you using vault or some other method of getting secrets into k8s clusters?
The whole use case is to keep secrets out of git. And for a highly collaborative environment I'd rather not have the secrets on everyone's workstation. In other projects with Kube, I've put the secrets in place and left them alone, so they take no management after that (unless we need to rotate them, but again, that only needs to be done once). We are looking at using vault but haven't yet.
Essentially, in order to simplify management of my dev/staging/prod pipeline, I'll have a dev/staging/prod config yaml. The dev/staging configs may or may not have secrets in them, but the prod one definitely will not. I don't want two repositories, but by putting production secrets in kubernetes they are still available. If they are created by helm, then the config needs to be available to helm every time an upgrade is done, which defeats the purpose of moving the secrets out of git…
I’m happy my work could be helpful to you in the future!
Does the config mount include the secrets?
Also, in case you are not aware, as I was not, there is a base64Encode filter in go templates.
Right. I do strongly believe that everything should be managed by helm, and mixing things managed by helm with things not managed by helm will lead to unhappiness long term.
That said, I totally understand that having two git repositories might be too much overhead for a large number of people, and there needs to be a better solution. But whatever it is should also be completely and deterministically reproducible from a git commit sha. We should be able to tear down a k8s cluster, bring a new one up, and deploy the exact same deployment we had earlier - and if we don't version control the secrets, we can't do that. We haven't thought a lot about how to do that yet - suggestions / thoughts welcome!
We have the exact same setup you suggested (dev and prod configs), but the entire repo is private because of the secrets issue. We deploy from a provisioner on GKE, so secrets aren't on users' laptops. This isn't ideal, but it's better than losing the reproducibility of the setup (which has saved us many times already). We'll probably scrub our repo and make it available publicly next week :)
helm adds a b64enc filter to its go template context, so you can use that for doing base64 conversions.
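For example, inside a Secret template the filter can do the base64 encoding at render time (the value path below is just an assumed example):

```yaml
data:
  # data entries must be base64-encoded; b64enc converts the plain value when the template renders
  proxy.token: {{ .Values.token.proxy | b64enc | quote }}
```

Without the filter, the values would have to be base64-encoded by hand before being placed in the values file.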
I see your point about helm and more or less agree. It seems like a game of "pick where you are unhappy". In this case, I see the secrets as separate from the code, and being able to deterministically reproduce the environment from a git hash should not be tied at all to the content of a secret. I think we see the same problem but have different opinions about it. I do agree that a manually managed secret in kubernetes is not a complete solution.
I'm afraid I don't have any better ideas at this point, other than an outside service in which secrets are managed, but that would violate your git ref stance. As long as you want the secrets 100% tied to a git ref, the best way will be to keep them in the same git repository.
Thanks!
https://github.com/kubernetes/helm/issues/2196 seems to be upstream work in helm to handle secrets better.
Cool, that looks like pretty good work!
Closes #163
Summary
Updated the hub.yml, proxy.yml and cull.yml chart templates to allow the deployment to use kubernetes secrets when the secrets and token are not included in the config.yml file. Updated values.yml with null values for lookup purposes.
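In other words, the templates fall back to an existing Secret when the corresponding value is left null. A rough sketch of that pattern follows; the environment variable, secret name, and key names are assumptions for illustration, not necessarily what this chart uses:

```yaml
# Hypothetical excerpt from a hub container spec
env:
  - name: JPY_COOKIE_SECRET
    {{- if .Values.hub.cookieSecret }}
    # value supplied in the config file: inline it as before
    value: {{ .Values.hub.cookieSecret | quote }}
    {{- else }}
    # no value in the config file: read it from a pre-existing kubernetes secret
    valueFrom:
      secretKeyRef:
        name: hub-secret
        key: hub.cookie-secret
    {{- end }}
```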
How to test
1: Remove the token/cookie secret from the config file
2: Create a kubernetes secret named 'hub-secret' with the same keys as would be in the ConfigMap (a sketch of such a secret is shown after this list)
3: Upload the secret file
4: Run helm upgrade to get the new config
5: Restart pods if necessary
6: Check the environment variables on the running pods to ensure they are running correctly
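A rough sketch of such a secret file; the key names are illustrative and should mirror whatever the templates look up, and the values must be base64-encoded. It can be uploaded with kubectl create -f hub-secret.yaml before running the helm upgrade:

```yaml
# hub-secret.yaml -- illustrative key names
apiVersion: v1
kind: Secret
metadata:
  name: hub-secret
type: Opaque
data:
  hub.cookie-secret: <base64-encoded cookie secret>
  proxy.token: <base64-encoded proxy token>
```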