Fizzadar opened this issue 4 years ago
Proposal:
```yaml
conditions:
  modes: [dev, test, deploy]  # (ktd up | ktd test | kubetools deploy)
  clusters:
    not: []
    match: ['*']
  namespaces:
    not: []
    match: ['*']
```
Essentially just a more explicit version of the current ones.
> * `clusters` is not consistent with `env` and `context`, as in, `kubetools` CLI expects
Yeah this is a weird one really, `context` is a `(cluster, user)` tuple, but only the cluster bit of that is any concern to the app; the user bit really only relates to the place `kubetools` is being executed from.
> * I think `deploy` is the only mode where `clusters` and `namespaces` apply which makes it a bit asymmetrical
You're absolutely right, doesn't quite work!
> * More flexibility might be needed in the `deploy` case: One might want to specify "(cluster production AND namespace A) OR (cluster backup AND namespace B)" (as in, for example, NOT namespace B in cluster production, NOT namespace A in cluster backup, NOT namespace C in any cluster). This could be achieved with a list of `clusters` + `namespaces` pairs
Interesting! So maybe we need to focus the conditions around the 'mode'. So, as default (everywhere/always):
```yaml
conditions:
  dev: true
  test: true
  deploy: true
```
Then limiting to clusters/namespaces could be:
```yaml
conditions:
  deploy:
    - ['production', 'A']
    - ['backup', 'B']
```
And finally the case where we want to deploy a dependency in all namespaces on the staging cluster but not elsewhere:
```yaml
conditions:
  deploy:
    - ['staging', '*']
```
This then removes the need for "not" matching because (I think?) everything can be achieved using the tuples with wildcard support.
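As a rough illustration of that claim, matching against a list of `[cluster, namespace]` pairs with wildcard support could be as simple as a glob check per element (the function name and semantics here are hypothetical, not actual kubetools code):

```python
from fnmatch import fnmatch


def should_deploy(pairs, cluster, namespace):
    """Return True if any [cluster_pattern, namespace_pattern] pair
    matches the deploy target, with glob-style wildcard support."""
    return any(
        fnmatch(cluster, cluster_pattern) and fnmatch(namespace, ns_pattern)
        for cluster_pattern, ns_pattern in pairs
    )


# "(cluster production AND namespace A) OR (cluster backup AND namespace B)"
pairs = [['production', 'A'], ['backup', 'B']]
assert should_deploy(pairs, 'production', 'A')
assert should_deploy(pairs, 'backup', 'B')
assert not should_deploy(pairs, 'production', 'B')

# A dependency deployed to every namespace on staging, but nowhere else
assert should_deploy([['staging', '*']], 'staging', 'any-namespace')
assert not should_deploy([['staging', '*']], 'production', 'any-namespace')
```

Since pairs combine with OR and each pattern can be a wildcard, "not" matching is only needed if you want to express exclusions more compactly than by enumerating the allowed pairs.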
One question - is defaulting everything to `true` or `false` better? Leaning towards `true` as this is more common, and then conditions can be used to explicitly disable certain things in certain places.
Regarding backwards compatibility: it will be trivial to detect the old-version `conditions` object by the keys within, so we can keep the old style implementation in tandem.
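A sketch of that key-based detection, using the key sets from the examples in this thread (the real kubetools key sets may differ):

```python
# Hypothetical top-level keys of the old-style conditions object,
# taken from the proposal at the top of this issue.
OLD_STYLE_KEYS = {'modes', 'clusters', 'namespaces'}


def is_old_style(conditions):
    """Old-style conditions use modes/clusters/namespaces keys;
    the new style keys the object by mode name (dev/test/deploy)."""
    return bool(OLD_STYLE_KEYS & set(conditions))


assert is_old_style({'modes': ['dev'], 'clusters': {'match': ['*']}})
assert not is_old_style({'deploy': [['staging', '*']], 'dev': True})
```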
> One question - is defaulting everything to `true` or `false` better? Leaning towards `true` as this is more common, and then conditions can be used to explicitly disable certain things in certain places.
Definitely `true`. When reading a `kubetools.yml` you should expect that everything applies, unless otherwise specified explicitly.
Instead of:
```yaml
conditions:
  deploy:
    - ['production', 'A']
    - ['backup', 'B']
```
I'd rather go explicit:
```yaml
conditions:
  deploy:
    - env: 'production'
      namespace: 'A'
    - env: 'backup'
      namespace: 'B'
```
Which opens up to negations and wildcards nicely:
```yaml
conditions:
  deploy:
    # namespace B in any environment but production
    - not_env: 'production'
      namespace: 'B'
    # Any namespace in backup environment
    - env: 'backup'  # Absence of `namespace` means `'*'`
```
And it also opens up backwards-compatible evolution if we later add other conditions than `env` and `namespace`.
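For illustration, evaluating these explicit condition rules might look like the sketch below. The semantics are my reading of the proposal, not a confirmed design: `env`/`namespace` must glob-match, `not_*` keys must NOT match, absent keys mean `'*'`, and rules combine with OR.

```python
from fnmatch import fnmatch


def rule_matches(rule, env, namespace):
    """Check one explicit condition rule against a deploy target."""
    for key, target in (('env', env), ('namespace', namespace)):
        # A present key must glob-match the target...
        if key in rule and not fnmatch(target, rule[key]):
            return False
        # ...and a present not_* key must NOT match it.
        if f'not_{key}' in rule and fnmatch(target, rule[f'not_{key}']):
            return False
    return True


def should_deploy(rules, env, namespace):
    # Rules combine with OR: deploy if any rule matches.
    return any(rule_matches(rule, env, namespace) for rule in rules)


rules = [
    {'not_env': 'production', 'namespace': 'B'},  # B anywhere but production
    {'env': 'backup'},                            # any namespace in backup
]
assert should_deploy(rules, 'staging', 'B')
assert not should_deploy(rules, 'production', 'B')
assert should_deploy(rules, 'backup', 'C')
```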
I like that a lot, definitely pro the explicitness!
One thing - we should sort out this mess of `env` vs `cluster` vs `context`. IMO we should go with `cluster`, as that's most appropriate (context is merely something the client uses to speak with a cluster). In tandem I also think we should populate `KUBE_CLUSTER` based on the context cluster name when deploying (rather than just passing the context name as-is, because they might be different).
Also, a thought for differentiating this from old configs:
```yaml
name: my-project
configVersion: 1
requireKubetools: '~=12.0'  # optional, replacement for minKubetoolsVersion
```
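A loader could then branch on this explicit marker rather than sniffing keys (the field name comes from the snippet above; the default-to-0 convention is an assumption):

```python
def config_schema_version(config):
    """Old configs never set configVersion, so treat its absence
    as version 0 and parse with the legacy schema."""
    return config.get('configVersion', 0)


assert config_schema_version({'name': 'my-project'}) == 0
assert config_schema_version({'name': 'my-project', 'configVersion': 1}) == 1
```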
> One thing - we should sort out this mess of `env` vs `cluster` vs `context`. IMO we should go with `cluster`, as that's most appropriate (context is merely something the client uses to speak with a cluster).
Since `kubetools` references `kubectl`'s `context`, it should stick with `context`. I'm not sure, but I can imagine an end-user might have separate `context`s defined (in `.kube/config`) to talk with the same `cluster` (maybe to represent separate "envs" :laughing:)
> In tandem I also think we should populate `KUBE_CLUSTER` based on the context cluster name when deploying (rather than just passing the context name as-is, because they might be different).
I think more is better in this case, but with consistency with `.kube/config`:

* `KUBE_CLUSTER` - the corresponding cluster
* `KUBE_CONTEXT` - the corresponding context
* `KUBE_NAMESPACE` - the corresponding namespace
Now this poses a few issues:

* I'm almost sure that `context` only makes sense from the deployment tooling side. Once the deployment is done, this notion doesn't exist any more in the k8s cluster.
* The same might apply to `cluster` depending on whether we're talking about the cluster `name` from `.kube/config` or a canonical name of the k8s cluster. My knowledge is short here for whether there is such a thing as the name of the cluster. If there is, then it would be best to use that.

Answers to these questions will define to what extent multiple users of the same cluster can customise their `.kube/config`, i.e. it will define which values must be the same for all users.
> I'm almost sure that `context` only makes sense from the deployment tooling side. Once the deployment is done, this notion doesn't exist any more in the k8s cluster.
Spot on, `context` is really only a thing on the deployment (user/CI) side.
> The same might apply to `cluster` depending on whether we're talking about the cluster `name` from `.kube/config` or a canonical name of the k8s cluster. My knowledge is short here for whether there is such a thing as the name of the cluster. If there is, then it would be best to use that.
There isn't really any concept of a "cluster name" - https://stackoverflow.com/questions/38242062/how-to-get-kubernetes-cluster-name-from-k8s-api. So this all ends up being entirely dependent on the deployment environment, which means it's flexible but... complex!
Agree w/ adding all three envvars, gives all the information to the underlying app and maximum flexibility! As for the conditions, `context` + `namespace` seem most sensible here to me. Ultimately the cluster AND context are up to the end user, and thus it's down to us/jenkins/wherever to enforce the correct values.
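As a sketch, resolving those three variables from the active context could look like this. The nested `contexts` structure is the standard kubeconfig layout; the function itself is a hypothetical illustration, not kubetools code:

```python
def deploy_env_vars(kubeconfig, context_name, namespace):
    """Build the three env vars from a parsed .kube/config dict.

    KUBE_CLUSTER is taken from the context's cluster entry rather
    than the context name itself, since the two can differ.
    """
    context = next(
        c['context'] for c in kubeconfig['contexts']
        if c['name'] == context_name
    )
    return {
        'KUBE_CONTEXT': context_name,
        'KUBE_CLUSTER': context['cluster'],
        'KUBE_NAMESPACE': namespace,
    }


# A context named differently from the cluster it points at:
kubeconfig = {
    'contexts': [
        {'name': 'prod-admin',
         'context': {'cluster': 'production', 'user': 'admin'}},
    ],
}
env = deploy_env_vars(kubeconfig, 'prod-admin', 'A')
assert env == {
    'KUBE_CONTEXT': 'prod-admin',
    'KUBE_CLUSTER': 'production',
    'KUBE_NAMESPACE': 'A',
}
```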
Current conditions - these can control where deployments/dependencies/upgrades all run/exist:
Ideally this should be less complex, with fewer gotchas between combinations!