Not possible with code as is :(
Could enrich the configuration of the current label applier to accept more rules, or write a new label transformer plugin and use that instead of the commonLabels field.
If it is not possible to have common labels not apply to the selectors, then you should consider removing support for the common labels feature. It's a trap feature which is almost certain to cause users trouble. In its current state it encourages you to NEVER change your labels and is very anti-CI.
Either that, or change the name of commonLabels to commonLabelsAndSelectors, because I don't think anyone who's worked on Kubernetes for a while would expect it to be added to selectors under that name. Adding to selectors is bad, especially because you're expected to have continuity (there's a reason these are separate fields rather than one and the same).
This behaviour should - at the very least - be clearly documented, because it's going to leave you with orphaned resources in case you update your commonLabels; e.g. pods spawned by a previous iteration of a DaemonSet will continue running after the label matching has been altered by updating commonLabels. (Edit: it is documented that commonLabels will apply to selectors, but maybe we want a warning there, because most people won't see the consequences.)
In my humble opinion, commonLabels should not propagate to templates or selectors at all. I guess changing its behaviour is far too late and would break kustomize for a lot of people, but what about introducing a new commonUnpropagatedLabels (please choose a different name, I'm just terrible at naming things) that will ignore templates and selectors?
Well, it will either cause non-spawning pods, OR, if your version of Kubernetes is new enough, it will simply fail, as selectors have become immutable in newer versions.
I DO fear that commonUnpropagatedLabels could become RealMysqlEscapeString, and I think that's the bigger risk. This sort of "the REAL thing you want is the thing with the different name because we didn't want to break backwards compatibility" is a pretty gross problem in a lot of projects, and I think it can be a big long-term concern.
Because this issue causes failures, marking this as a bug.
/sig cli /kind bug
@Benjamintf1 @embik thanks for the great commentary.
The upside to propagation of labels to selectors is convenience and accuracy. In directory foo, the directive commonLabels foo will associate a Service in that directory with all pods created in that directory, with no typo mistakes. Same in directory bar, etc. Some encapsulating overlay kustomization can combine foo and bar by mentioning them as bases, and add a wider-scoped label (which also propagates to selectors).
One doesn't have to change labels - one can just add more, leaving the old labels in place, and proceed with some label deprecation scheme across the cluster as time passes.
In favor of a rename (commonLabelsAndSelectors would be clearer), but I would like more evidence that automatic propagation to selectors is a poor choice for default behavior.
There is also a possible use case of having the same thing deployed twice using different labels, but you'd need more than that to make it functional (including deploying to a different namespace, but also you might have some trouble with cluster resources, so basically you'd have to fuzz the name or location of everything to get it to work).
I'm not aware of a solid way to deprecate selectors at all at the moment. The best one could do is something with --prune these days (I think; it looks like a new feature), but even then, you'd want that as a common label that doesn't apply to the selectors.
Part of the reason I think propagating to selectors is a bad choice is that it isn't clear. In my personal opinion, you should use an absolutely minimal set of labels for your selectors, and have additional labels for querying and understanding (arguably some of them could be annotations for performance reasons, but I haven't seen that be a problem in my experience), because of the absolute need for continuity between earlier and later applied files.
I think there IS a use case for both commonLabels and commonSelectors (or you could call this "commonDistinguishers", or some other derived term), but I don't think these options should be combined in the same configuration. I think there's more clarity in having them as separate paths, because they are entirely different use cases and goals.
@monopole
One doesn't have to change labels - one can just add more, leaving the old labels in place, and proceed with some label deprecation scheme across the cluster as time passes.
I might be wrong here, but won't exactly that result in orphaned pods? Because adding additional commonLabels will append them to your selector, which means pods spawned by the previous iteration of your deployment/daemonset won't be selected by your controller object anymore (the existing pods miss the added label and thus are not selected).
I'll throw together a short demonstration of that if I find the time.
The tl;dr of https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#label-selector-updates is: make a plan (the 'label deprecation scheme' I alluded to above), and understand that changing workload selectors really means creating new workloads and deprecating old ones, and having to delete them on your own. kustomize doesn't create this problem.
How about a new kustomization term commonSelectedLabels that does what commonLabels does now, and changing the behavior of commonLabels to behave as commonAnnotations does, i.e. just add KV pairs to metadata and don't mess with selectors?
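To make that concrete, a hypothetical illustration of the split (neither the commonSelectedLabels field nor this exact behavior exists today; this just spells out the proposal):

commonLabels:            # proposed: only adds to metadata/labels, like commonAnnotations
  team: payments
commonSelectedLabels:    # proposed: keeps today's behavior, labels plus selectors
  app: my-app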
This breaks existing behaviors, so we'd have to increment the version in the kustomization file (which is a pain, since that adds text to all kustomization files that is currently optional since there's only one version).
Any other ideas?
I came across this issue while trying to add a revision-based label to all pods in a Kustomization and then use it as a part of confirming that all Pods of that version were ready later. In this case, I feel like I do want this label to be propagated down into workload templates, but don't want it to be propagated to selectors.
Any other ideas?
This desired additional dimension of customizability, and the above commentary about breaking current functionality, make me wonder if it might be worth considering introducing a new commonLabelOptions term that could be used as follows:
commonLabels:
  existingLabel: something
  # generated by CI tooling via `kustomize edit add label`
  revision: decaf123
commonLabelOptions:
  defaults:
    propagateToSpecTemplate: true
    propagateToSelectors: false
  labels:
    existingLabel:
      propagateToSpecTemplate: true
      propagateToSelectors: true
I do agree that propagation to selectors should probably be off by default, but this approach might support deferring changing the apiVersion for a while if that's desired.
If the tack is to go down the path of being customized off the same "root" option, why not have
commonLabels:
  existingLabel:
    value: something
    propagateToSpecTemplate: true
    propagateToSelectors: false
It keeps the locality of the options a bit better, especially if you scale the number of labels you want.
(You can also keep some degree of backwards compatibility with this by treating a single string as the old way and eventually deprecating it, but having added labels be added in the new way)
as I explained in thread https://github.com/kubernetes-sigs/kustomize/issues/157 propagating labels to selectors is actually wrong thinking, a "trap" as Benjamin put it.
The commonLabels are meant to "tag" those resources that YOU create with your kustomize setup. The selectors constitute an ID that points to a resource that cannot be assumed to be under your auspices. Making this assumption is completely arbitrary.
As a mind-aid, consider the case that you want to register stored procedures with a database.
kind: storedProcedure
version: ...
spec: SELECT * FROM some_table
If this manifest was under the control of kustomize and you had defined commonLabels, then this SQL statement itself is YOUR resource and you might expect that when it gets registered with a database, it would get tagged with your commonLabels. However, those commonLabels should never modify the name of the database table "some_table". How strange it would be if someone proposed that this should happen. Renaming commonLabels to commonLabelsAndSelectors is beside the point, which is that you would never want to rewrite selectors, because this would falsify an identity. Saying "yeah, but some_table might also have been created by me" is a possible use case, but there is no reason why this use case should get any preference over the opposite case, which is that the resource has NOT been created by you or is not part of your set of kustomization setups.
A sensible default would therefore be to not tamper with selectors. The case for adding labels to selectors would be to thereby give them another identity. One can allow for this special case, but then one should make it very explicit and not confuse it with labels, which are meant to tag YOUR resources and are not a templating facility for modifying the identity of foreign keys.
If the tack is to go down the path of being customized off the same "root" option, why not have
commonLabels:
  existingLabel:
    value: something
    propagateToSpecTemplate: true
    propagateToSelectors: false
It keeps the locality of the options a bit better, especially if you scale the number of labels you want.
(You can also keep some degree of backwards compatibility with this by treating a single string as the old way and eventually deprecating it, but having added labels be added in the new way)
With this approach it is not obvious how to change the defaults.
With this approach it is not obvious how to change the defaults.
You can presumably leave off the "defaulting" options if you have them separate. You'd presumably learn it the same way, via documentation.
If I understood correctly, the reason the selectors are modified is that changing a label on a resource will change its identity in case it is used by a selector.
Implementing a new approach will have the same problem, because you might change a label expected by a selector that didn't change.
The selector is updated to add the new labels, when in fact it should only change the existing ones that were changed.
IMHO this shouldn't be a breaking change; it's more like a bug fix, because nobody was expecting this dodgy behaviour.
A possible solution here may be to introduce the concept of commonSelectors that, if present, would be populated into the selectors instead of the commonLabels. For example:
commonLabels:
  app: foo
  environment: dev
commonSelectors:
  app: foo
  release: bar
This would generate selectors of app: foo and release: bar but not environment: dev, while the labels would be app: foo and environment: dev. This would allow for more flexibility.
If commonSelectors is absent, then commonLabels could continue to work as it does currently. This would allow the expected behaviour for current users to remain the same, while giving people an opportunity to do other things if needed.
However, this doesn't address the problem that selectors, by design, are usually meant to be more directed. As such, this should probably be extended to allow the generation of more specific selectors in specific objects, or to allow the exclusion of the generation from specified objects.
Looks like it's already available. https://kubectl.docs.kubernetes.io/pages/reference/kustomize.html?q=#meta-options
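For anyone who hasn't seen that page, a rough sketch of the per-kustomization transformer configuration it describes (the file name and the custom kind are made up; note, as later comments point out, that these fieldSpecs augment the builtin defaults rather than replace them):

# kustomization.yaml
commonLabels:
  environment: dev
configurations:
  - labels-fieldspecs.yaml

# labels-fieldspecs.yaml: extra fields the commonLabels transformer should touch
commonLabels:
  - kind: MyCustomKind          # hypothetical CRD kind
    path: spec/template/metadata/labels
    create: true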
Hello, I've been facing the exact same problem of putting the version in commonLabels.
@darwin67 you're right, but this can be quite a problem if you have tens of microservices. The personalized configuration file for kustomize (the one your link is pointing to) would have to be present in ALL the folders.
Something like @taha-au suggested is probably a good idea. Or the possibility to specify the paths (metadata/labels, spec/selector/, ...) directly in the kustomization.yaml file.
---
commonLabels:
  paths:
    - kind: Deployment
      paths: ["metadata/labels", "spec/selector"]
    - kind: Service
      paths: ["metadata/labels"]
  labels:
    app: foo
    environment: dev
    version: 1.0.0
I'm not sure what you mean by ALL though.
We have kind of similar situations, but we structure it so that we only put the common configurations in each base, so we always just reference the base from the overlays, and the overlays can go a little crazy with multiple versions of the same base. So let's say I have 10 bases and I want 3 versions of each app: the maximum amount of common configuration I need is still only 10.
Maybe the configurations setting can be improved, but as an everyday user I'm actually kind of happy with the way it is right now, and the reason is that we use Istio, and having the version info propagated actually helps with dynamic traffic routing, like mirroring and A/B testing.
I get the idea that people want to set everything in one place, but I thought the whole idea of this tool is composition; centralizing the configurations in one place doesn't really defeat the purpose, but I feel like it's moving away from that idea. Just my 2 cents.
I guess it depends on how your services are set up. In my case we have a setup that looks something like this, with a repo/monorepo:
micro-service-1/
    base/
    develop/
    staging/
...
micro-service-X/
    base/
    develop/
...
My point in my previous message was that if the whole company (or certain teams in charge of multiple microservices) uses the same "custom" kustomize configuration, we would have to copy that kustomize configuration into each micro-service-#/base folder. I totally agree on the tool being about composition, but within a certain limit, I guess. Maybe our architecture/use of overlays is not optimal; I'm just relating how we use it.
I think the only way to do this currently is to publish a base that has this custom configuration and have all the micro-services use the same base. However only github is supported for remote bases.
Maybe having git supported for remote bases could help? Like how you can reference a remote terraform module with a git reference, like tags or even the SHA.
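For what it's worth, the remote base syntax already accepts a git ref on github URLs, so a shared base could be pinned to a tag or SHA; a sketch (the org, repo, and path here are made up):

# micro-service-X/base/kustomization.yaml
bases:
  - github.com/your-org/shared-kustomize-config//common/base?ref=v1.2.3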
Another reason why labels should not be applied on selectors by default: defining NetworkPolicy that allows ingress to application's web pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-access
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: web
      app.kubernetes.io/name: web-app
  policyTypes:
    - Ingress
  ...
Suddenly some labels from commonLabels get applied to the podSelector, making it possible for any pod to communicate with e.g. a database that happens to have that common label (such as a client id or project name in our case).
I would flag this as potential security issue.
commonLabels not updating selectors would be really annoying. Wouldn't adding a labels transformer where you could specify a target (maybe reusing the syntax of the jsonPatch target) solve the issue? Like in:
labels:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: test
    labels:
      version: v0.1.0
Or follow whatever the new syntax for multi target patches will be.
Ah, since the commonLabels directive adds labels to spec.selector.matchLabels, I have to add another patch to remove it.
- op: remove
  path: /spec/selector/matchLabels/<mylabel>
- op: remove
  path: /spec/template/metadata/labels/<mylabel>
And in kustomization.yaml
patchesJson6902:
  # this is hacky because of https://github.com/kubernetes-sigs/kustomize/issues/1009
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: git-apply
    path: patches/rm-deploy-common-label-in-selector.yaml
Honestly, I don't like the behavior of applying common labels to spec.selector at all; it makes adoption harder (managing an existing deployment via kustomize), since those fields are immutable, which is very annoying.
We just hit this unexpected behavior in kustomize, trying to add a version label to use in istio routing. Having it added to the selector is very problematic now that it is an immutable field:
[]v1.LabelSelectorRequirement(nil)}: field is immutable
I hit this problem today, trying to bring an existing DaemonSet under a new kustomization root. I can't use commonLabels to add a new label to the DaemonSet, because it tries to mutate the DaemonSet's label selector, which is immutable.
Looks like it's already available. https://kubectl.docs.kubernetes.io/pages/reference/kustomize.html?q=#meta-options
@darwin67, when I tried to override the configuration for the "spec/selector/matchLabels" path in a DaemonSet like this,
commonLabels:
  - kind: DaemonSet
    path: spec/selector/matchLabels
    create: false
kustomize rejects it, complaining as follows:
Error: conflicting fieldspecs
That tells me that it's not possible to override the built-in rules.
This is very annoying, since the label selectors are immutable. It makes it very hard to migrate a current setup.
Hi, just getting back to add another very annoying issue.
The labels are also applied to the Service selector, which means that until new pods created from the new deployment are running and healthy, the new Service does not have any endpoints! Istio was very unhappy when we tried this.
It makes the migration of a production setup practically impossible without any downtime.
We are adding two patches, one for the deployment and one for the services, to remove from the selectors the labels we do not want to be added, in order to handle the transition, like @zhouhaibing089 is doing.
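For reference, a sketch of what the Service-side patch might look like, mirroring @zhouhaibing089's Deployment patch above (the service name, file name, and label placeholder are illustrative):

# patches/rm-svc-common-label-in-selector.yaml
# Service selectors live at spec/selector (no matchLabels nesting)
- op: remove
  path: /spec/selector/<mylabel>

# kustomization.yaml
patchesJson6902:
  - target:
      version: v1
      kind: Service
      name: my-service
    path: patches/rm-svc-common-label-in-selector.yaml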
I am also very much in favor of not applying commonLabels to selectors by default, for the reasons mentioned above, especially now that selectors have become immutable in current Kubernetes Deployments and DaemonSets.
As @taha-au mentioned, splitting labels and selectors would allow much better control.
The following approach would also make it easy to define defaults, similar to what already exists via configurations:
commonLabels: &labels
  app.kubernetes.io/version: v1.2
  app.kubernetes.io/name: my-app
commonSelectors:
  app.kubernetes.io/name:
    # [explicit] set labels at spec.selector for v1.Service types
    - kind: Service
      version: v1
      path: spec/selector
      create: true
    # [with defaults] set labels at spec.selector.matchLabels for Deployment types
    - kind: Deployment
This would allow much finer control over which labels get applied where. One could also mostly re-use the current configurations as default values for commonSelectors.
However, this could also be used for other applications, not just for matchLabels/selector; one could basically apply these mutations to all sorts of things.
And this is where we come back to plain old patches. Given that we already have a mechanism to apply patches to resources, why not simply re-use this mechanism and drop the restriction on having to specify a name? (#1568) Instead one could use a selector mechanism similar to what Kubernetes does - based on labels, annotations, kinds, versions, names, whatever...
Maybe even allow defining a patch inside kustomization.yml instead of having to have it in a separate file referenced via path:. This would allow users to make use of YAML anchors to avoid some repetition.
This would give maximum control over how and where these matchLabels and selectors are applied, would involve much less "magic", and would re-use functionality that is already there anyway.
The nice thing about commonLabels is that it's dead simple. The patches syntax is nice but more complex. Why not just whitelist labels for selectors, as in:
commonLabels:
  app.kubernetes.io/version: v1.2
  app.kubernetes.io/name: my-app
selectorLabels:
  - app.kubernetes.io/name
Then if there are different services that have different selectors, maybe they deserve their own kustomization.yml file?
With this, you would still not solve the problem of having them applied to sections where you do not want them applied. @sascha-coenen explained the issue of inheritance above.
Making it explicit avoids these cases, since you state yourself where you want which label applied. This also avoids unexpected behavior, as we currently see with orphaned resources on older Kubernetes, or failed deployments on newer Kubernetes, due to the selectors changing.
Doing an overdue bug sweep.
This issue could have been closed when plugins were released, because they provide support for common labels that don't get set on selectors.
This support isn't provided by changing the behavior of the commonLabels field, or by introducing a new labels-oriented kustomization field alongside commonLabels. It's supported by using the existing transformers field to directly configure the (builtin) LabelTransformer plugin.
An example to follow is here:
A new issue could be opened to request a new kustomization field, if some agreement can be established on how it would behave. Behind the scenes, that hypothetical new field would just be running the LabelTransformer with a particular hardcoded config.
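For readers who just want the gist without chasing the link, a minimal sketch of that approach as I understand it, using the builtin LabelTransformer config (the label value and file names are illustrative):

# kustomization.yaml
transformers:
  - labelTransformer.yaml

# labelTransformer.yaml: adds the label to metadata/labels only, leaving selectors alone
apiVersion: builtin
kind: LabelTransformer
metadata:
  name: addVersionLabelWithoutSelectors
labels:
  app.kubernetes.io/version: v1.2.3
fieldSpecs:
  - path: metadata/labels
    create: true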
@monopole This link doesn't exist: https://github.com/kubernetes-sigs/kustomize/blob/master/api/internal/target/customconfigreusable_test.go
@monopole do you have an example to setup labels on metadata, without add them on selector ? Thanks.
@digi691 it moved: https://github.com/kubernetes-sigs/kustomize/blob/master/api/krusty/customconfigreusable_test.go
These tests are written as high-level examples, so I felt it misleading and off-putting to have them buried in an internal package. So I moved them up to the api's main entry point. Sorry about the dead link.
@nlamirault see previous comment
Thanks. Another example here: https://github.com/viadee/kustomize-examples/blob/master/overlays/custom-metadata-labels/metadataLabelTransformer.yaml
From #330
@monopole Can I then use kustomize edit set label version=v0.4.2 with the LabelTransformer? Do you have a link to the 'new kustomization field' that would let us update both the image version and the corresponding version label as a single atomic command?
Ah, since the commonLabels directive adds labels to spec.selector.matchLabels, I have to add another patch to remove it.

- op: remove
  path: /spec/selector/matchLabels/<mylabel>
- op: remove
  path: /spec/template/metadata/labels/<mylabel>

And in kustomization.yaml

patchesJson6902:
  # this is hacky because of https://github.com/kubernetes-sigs/kustomize/issues/1009
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: git-apply
    path: patches/rm-deploy-common-label-in-selector.yaml

Honestly, I don't like the behavior of applying common labels to spec.selector at all; it makes adoption harder (managing an existing deployment via kustomize), since those fields are immutable, which is very annoying.
@zhouhaibing089 How did you get this to work?
I am using the Prometheus Operator and commonLabels breaks it. The service gets the selector, but the operator doesn't add this label to the pods, so the service can't find any of the pods. Very frustrating.
I can't seem to add a patch to remove the selector either:
❯ kustomize build overlays/cicd
Error: accumulating resources: recursed accumulation of path '/Users/jason/source/observability/monitoring-stack/overlays/cicd/monitoring': failed to apply json patch '[{"op":"remove","path":"/spec/selector/variant"}]': error in remove for path: '/spec/selector/variant': Unable to remove nonexistent key: variant: missing value
Even though the Prometheus service does get it:
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
    variant: cicd
  name: prometheus-k8s
  namespace: monitoring
spec:
  ports:
    - name: web
      port: 9090
      targetPort: web
  selector:
    app: prometheus
    prometheus: k8s
    variant: cicd
  sessionAffinity: ClientIP
It seems, annoyingly, that the patch is being applied before the commonLabels, even though this is not the case for other kustomize fields, such as namespace, which does get applied before the patch.
@steinybot: My structure looks like this:
base/
  kustomization.yaml
  deployment.yaml
overlay/
  foo/
    kustomization.yaml
    patches.yaml
In this example, I have commonLabels specified in overlay/foo/kustomization.yaml, and my patch is specified in overlay/foo/patches.yaml.
Could you share your structure a little bit more? It seems your patch is not specified in the overlay.
I forget exactly what I tried. I ended up just embracing this weirdness and found the setting in prometheus-operator that allows me to add labels to the pods it creates (podMetadata, from https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec).
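For anyone landing here with the same Prometheus Operator setup, a rough sketch of that podMetadata workaround (the label mirrors the variant: cicd value from the output above; not verified against any particular operator version):

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  # labels set here are stamped onto the pods the operator creates,
  # so they can satisfy the selector that commonLabels injected into the Service
  podMetadata:
    labels:
      variant: cicd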
Ah, since the commonLabels directive adds labels to spec.selector.matchLabels, I have to add another patch to remove it.

- op: remove
  path: /spec/selector/matchLabels/<mylabel>
- op: remove
  path: /spec/template/metadata/labels/<mylabel>

And in kustomization.yaml

patchesJson6902:
  # this is hacky because of https://github.com/kubernetes-sigs/kustomize/issues/1009
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: git-apply
    path: patches/rm-deploy-common-label-in-selector.yaml

Honestly, I don't like the behavior of applying common labels to spec.selector at all; it makes adoption harder (managing an existing deployment via kustomize), since those fields are immutable, which is very annoying.
Only this method worked for me on kubectl:
kubectl version --client
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
metadataLabelTransformer wasn't supported on kubectl v1.18.2.
Hi all,
I face a similar issue when I try to update the "version" label in the CI/CD pipeline. I execute:
cd base
kustomize edit add label version:xyz123
kustomize build ../overlays/production | kubectl apply -f -
And receive the following error:
MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
Has anyone found a viable workaround to update the version label during deployment?
Bringing this back to life as it's still being requested and we may have someone to work on it.
Summary:
The commonLabels directive tells the LabelTransformer to add name:value pairs to the fields specified in its default config.
The convenience here is that one doesn't have to specify the fields that should get labels. The price paid is lack of control, e.g. sometimes getting unwanted labels in selector fields.
The default configuration can be overridden by declaring a transformers: field in the kustomization file referring to the LabelTransformer.
The examples above include a LabelTransformer that is told to label only specific fields, e.g. a LabelTransformer that only adds pager: 867-5301 to the metadata/labels field of a Deployment and nothing else.

We should not change the schema of the commonLabels field. It would create too much confusion. Technically we could do so if we incremented the version number in the kustomization struct/file, but IMO it's not worth it.
Some ideas from above amount to adding a new field alongside commonLabels.

Suggestions:
- Call the new field labels.
- The new field and the commonLabels field should not be allowed to coexist in one kustomization file (to avoid ordering confusion).
- A kustomize fix command should read in a kustomization file with commonLabels and write out the equivalent using the new field, because the new field would be a superset of capability.
- The new field should be simpler to use than configuring the LabelTransformer directly as in the examples above (similar to how the patches field has lots of flexibility, but is still simpler than using the PatchTransformer directly).
- Behind the scenes, the new field should be implemented with the existing LabelTransformer (as opposed to requiring a new transformer with a new configuration format).
(as opposed to requiring a new transformer with a new configuration format).The simplest thing might be just make it a list of label transformer plugin configs, e.g.
labels:
  - pairs:
      bird: eagle
    fields:
      - path: spec/selector/matchLabels
        kind: Whatever
  - pairs:
      fruit: apple
      truck: f150
    fields:
      - kind: Deployment
      - kind: Service
Implicit here would be that metadata/labels should always get the label, so that particular path doesn't need to be specified here.
It has to be simpler than using the raw LabelTransformer config, because that's the whole point of these abbreviations.
@Shell32-Natsu wdyt?
As I've stated in my duplicate issue, I think the best solution for this would be to have two separate properties in your kustomization file: one that adds labels just as labels, and one that adds them as labels and selector labels.
I think the cleanest solution would be this:
commonLabels:
  app.kubernetes.io/owner: ATeam
selectorLabels:
  app.kubernetes.io/name: my-app
Although I do understand that this would introduce breaking changes in current deployments and would require manual interaction with resources, I still think this would be the clearest option for users of Kustomize.
@monopole LGTM, will do it when I have time.
certain labels such as version shouldn't be propagated to selectors, but are common to the entire app.