In the very early days, we talked about adding this feature. The idea is still on the table, but we decided that for the first release we would focus on the ins and outs of package management.
For now, we opted for the workflow helm fetch -> edit files -> helm install. We are hoping to gain some insights into how people use Helm, and what their needs are.
So please keep feeding ideas about this particular feature. We'd love to start accumulating a lot of user stories, and then figure out whether what we need is something like env var expansion, or more like Go templates... or even something like a full-on scriptable preprocessor.
I don't mind how the Nulecule spec is handling parameters - https://github.com/projectatomic/nulecule/tree/master/spec#parameters-object.
It means the install process can prompt for parameter values (or be given a prepopulated answers file). There's less need to edit/fork the charts, but I would imagine it's closer to a full preprocessor.
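To make that concrete, here is a rough, hypothetical sketch of the kind of parameter metadata and prepopulated answers this implies; the field names are illustrative only and not copied from the Nulecule spec, so see the linked parameters-object section for the real definitions:

# parameter metadata a package could declare (hypothetical field names)
params:
- name: SERVER_DOMAIN
  description: Domain the application is exposed at
  # no default, so the install process would prompt for a value
- name: DB_PASSWORD
  description: Password for the application database
  default: changeme

# a prepopulated answers file the user could supply instead of being prompted
# (again purely illustrative)
SERVER_DOMAIN: myapp.example.com
DB_PASSWORD: s3cret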
@hunter agreed. OpenShift Templates have the same kinda metadata (default values / required etc).
Though ideally I think helm should work with upstream kubernetes metadata, so we'd need something like this to get merged upstream.
@technosophos we've been looking at how we might bring Deployment Manager templates into helm, as an alternate type of Chart package content. See the design document for an overview and the README for some quick examples.
FWIW, we've simplified the getting-started process, so bootstrapping is no longer required.
@jackgr Thanks! We've been looking through Deployment Manager (and all of the various template proposals). I'd love to hear ideas about how Deployment Manager's model might fit into or align with the Helm model.
Same goes for ideas about Nulecule, @hunter and @jstrachan. I talked in person to @dustymabe about this last week, and he got me oriented.
@technosophos Sounds good. Let's talk in person. I'll follow up by email.
@technosophos @jackgr some of us were actually talking about helm today. @vpavlin has actually been looking at helm and we'd like to possibly coordinate efforts in the future. We'll probably try to touch base sometime next week.
Just posted a side-by-side comparison of dm and helm on the Google group for sig-config. PTAL.
@dustymabe We should meet at KubeCon with @technosophos.
We'll be at our booth! Come drop by any time :)
Incidentally, the Config Resource proposal looks to be another approach to adding parameters and using templates to resolve them: https://github.com/kubernetes/kubernetes/pull/6477
So rather than creating templates with parameters, the Config Resource proposal looks like it would let us either mount a ConfigData resource as a file on a volume in pods, or reference configuration values from the ConfigData as environment variables.
What I particularly like about this approach is that all of the other kubernetes resources then become simple, static and reusable in all environments as is (without any post-processing etc). We'd also avoid endless debates on the ideal template language - all we'd need is a canonical expression to refer to values in the ConfigData when binding the environment variable values (kinda like we can do right now for namespace and pod names etc):
"name" : "MY_NAMESPACE",
"valueFrom" : {
"fieldRef" : {
"fieldPath" : "metadata.namespace"
}
}
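By analogy, referencing a value held in a ConfigData resource from an environment variable might look something like the sketch below (shown in YAML for brevity); the configDataRef field and the names here are hypothetical, since the proposal above doesn't pin down a concrete syntax:

- name: DATABASE_URL
  valueFrom:
    configDataRef:        # hypothetical field, by analogy with fieldRef above
      name: myapp-config  # the ConfigData object kept in this namespace/environment
      key: database.url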
Then from a helm/dm perspective, upgrading a Chart would just mean switching to the new Replication Controller, Deployment Config, Service and other resources, as the ConfigData would be separate and kept in each environment/namespace.
This would keep the package manager's life simpler: it would generally need to create a ConfigData the first time things are installed, and then upgrades/downgrades would be simpler, separating the namespace/environment/installation-specific data (ConfigData) from the generic & shared data (all the other kubernetes resources).
I guess one thing we may need to look at longer term is that if a version of a Chart introduced new ConfigData values which don't exist in an environment's ConfigData values, we may have to mutate it to add new default values or something. But that shouldn't be too hard I guess.
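As a very rough sketch of that longer-term concern (illustrative only, and assuming a ConfigData shape that is just a flat map of keys to values, which the proposal doesn't pin down):

kind: ConfigData            # hypothetical kind from the proposal
metadata:
  name: myapp-config        # created in the environment at first install
data:
  server.domain: myapp.example.com
  # v2 of the Chart refers to a new key; on upgrade the package manager would
  # merge in the Chart's default so the existing, customised values survive:
  log.level: info           # newly added default from the v2 Chart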
@jackgr @technosophos I couldn't make it to KubeCon this week, but maybe we can set up something soon? @technosophos I believe @vpavlin reached out to you already?
So rather than creating templates with parameters, the Config Resource proposal looks like it would let us either mount a ConfigData resource as a file on a volume in pods, or reference configuration values from the ConfigData as environment variables.
This is ideal. I'll go a step further and say keeping Chart/DM manifests static seems like a reasonable design goal for ConfigData + Secrets.
Another use case I hope to solve is the ability to SIGHUP amongst pod containers on ConfigData and Secrets changes, something close to @kelseyhightower's heart. I believe this may be pending docker/docker#10163.
I guess one thing we may need to look at longer term is that if a version of a Chart introduced new ConfigData values which don't exist in an environment's ConfigData values, we may have to mutate it to add new default values or something.
This is a tricky bit for sure. Given how substantially the config "contract" can change release-to-release, it seems like we'd be better off warning the user and prompting them before applying the change, not unlike how OS package managers like apt function today.
@technosophos Talked to @gabrtv at KubeCon today. I think we're agreed that #6477 solves the last mile configuration problem, but doesn't help with deploying an application.
Is this discussion related to https://github.com/helm/helm/issues/309, or are these two mutually exclusive and solve two different use cases?
Essentially, yes. Closing this one.
one thing that OpenShift added as an extension a while ago is OpenShift Templates, which basically let you provide a list of parameter names and default values; you can then refer to the parameters using a template expression like ${FOO_BAR} to expand things like environment variables in, say, a Pod template in a Replication Controller with environment-specific things (like host/domain or login/passwords or something). It's pretty easy to evaluate the templates locally to turn the JSON into a vanilla kubernetes List of resources by letting the user override parameter values on the CLI or using defaults etc.
It would be nice to have something like this in helm.
Unfortunately this hasn't made it upstream into kubernetes yet.
To give an example, here's an OpenShift template that runs the gogs git hosting server. To instantiate it we need to tell gogs what domain name it's going to be exposed at so that its UI can show the right git clone URL etc. E.g. if you look at the parameters, there's a parameter called GOGS_SERVER_DOMAIN which represents the server host domain.

What would be nice is that when installing something via helm, you got the chance to override parameters (if there are any) manually; those parameter values would then be remembered by writing them to a yaml/json file locally. Then if the user upgrades to a new version of the Chart, it would re-apply those previously used parameters by default. That way folks can move forwards/backwards in versions of the Chart while keeping the environment/namespace-specific parameter values across versions.
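For illustration, here's a heavily cut-down sketch of the shape of such a template (this is not the actual gogs template linked above, just an indication of how a parameter is declared and then expanded):

kind: Template
apiVersion: v1
metadata:
  name: gogs
parameters:
- name: GOGS_SERVER_DOMAIN        # referred to below as ${GOGS_SERVER_DOMAIN}
  description: Domain name the gogs UI and git clone URLs are exposed at
  required: true
objects:
- kind: ReplicationController
  apiVersion: v1
  metadata:
    name: gogs
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: gogs
      spec:
        containers:
        - name: gogs
          image: gogs/gogs                    # illustrative image name
          env:
          - name: GOGS_SERVER_DOMAIN
            value: ${GOGS_SERVER_DOMAIN}      # expanded when the template is processed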