Open Desolar1um opened 1 year ago
This issue is currently awaiting triage.
SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Also, when the kustomization has managedByLabel (in buildMetadata), it writes the label app.kubernetes.io/managed-by: kustomize-(devel), which is an invalid label value for k8s resources.
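For reference, here is a minimal sketch of a setup that reproduces the bad label; the configmap.yaml resource and the directory layout are placeholders I am assuming, not taken from the original report:

$ cat kustomization.yaml
resources:
- configmap.yaml
buildMetadata:
- managedByLabel

$ kubectl kustomize . | grep managed-by
    app.kubernetes.io/managed-by: kustomize-(devel)

With an affected kubectl build the value ends up containing "(" and ")", which the label validation regex quoted below rejects.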
I ran into this issue using the bitnami/kubectl Docker image, which failed my kustomize build with this error:
Error from server (Invalid): error when creating "STDIN": ConfigMap "<redacted>" is invalid: metadata.labels: Invalid value: "kustomize-(devel)": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')
I worked around it by downgrading to bitnami/kubectl:1.25.15.
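In case it helps others hitting this in CI, the failing step looks roughly like the sketch below; the pipeline shown is an assumption about the setup, not taken from the original report, and the downgrade (or simply dropping managedByLabel from buildMetadata) is only a stop-gap until the client ships a tagged kustomize:

# with an affected client (kubectl >= 1.28 built against a kustomize pseudo-version)
$ kubectl kustomize . | kubectl apply -f -
Error from server (Invalid): error when creating "STDIN": ConfigMap "<redacted>" is invalid: ...

# stop-gaps: run the same step with an older client image such as bitnami/kubectl:1.25.15,
# or remove managedByLabel from buildMetadata so the label is not emitted at all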
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
I've also hit this issue on 1.29.
$ kubectl version --client --output=json
{
"clientVersion": {
"major": "1",
"minor": "29",
"gitVersion": "v1.29.0",
"gitCommit": "3f7a50f38688eb332e2a1b013678c6435d539ae6",
"gitTreeState": "clean",
"buildDate": "2023-12-13T08:51:44Z",
"goVersion": "go1.21.5",
"compiler": "gc",
"platform": "darwin/arm64"
},
"kustomizeVersion": "v5.0.4-0.20230601165947-6ce0bf390ce3"
}
This appears to have started being a problem with https://github.com/kubernetes/kubectl/commit/90ca180ce06151e5bd8ff1e73756ed3d5e03f069, so it affects 1.28 onwards.
The kustomize commit it points to is https://github.com/kubernetes-sigs/kustomize/commit/6ce0bf390ce3
I don't know enough about the build process; this seems to have been some sort of automated commit (it is just titled "vendor" and I can't find an associated PR). Perhaps @Jefftree can shed more light on it, as it looks like the commit on both kustomize and kubectl came from them.
Running go mod graph | grep sigs.k8s.io/kustomize/kustomize/v5,
it appears there are no transitive dependencies on kustomize, so perhaps this can be updated to 5.1.0 (or later? the latest kustomize is 5.4.1).
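If it helps the investigation, the kustomize module a given kubectl binary was actually built against can be read from the binary itself, assuming it was built with module information embedded (the binary path is an example, and the output is trimmed):

$ go version -m "$(which kubectl)" | grep kustomize/kustomize
	dep	sigs.k8s.io/kustomize/kustomize/v5	v5.0.4-0.20230601165947-6ce0bf390ce3

That pseudo-version matches the Kustomize Version string reported above, which is consistent with the vendoring commit being the point where the untagged pin was introduced.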
/remove-lifecycle rotten
Hi, I also face the same issue with app.kubernetes.io/managed-by: kustomize-(devel). Is there any solution for that?
Client Version: v1.30.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0-eks-036c24b
thanks
Please see this KEP: https://github.com/kubernetes/enhancements/issues/4706. This issue will no longer be valid in the next releases.
Yes. Someone unrelated to the kustomize maintainers updated the kustomize package in kubectl using a non-release tag.
https://github.com/kubernetes/kubernetes/pull/118384 https://github.com/kubernetes/kubernetes/pull/118384/files#diff-33ef32bf6c23acb95f5902d7097b7a1d5128ca061167ec0716715b0b9eeaa5f6R244-R246
The current master branch is fixed to use the kustomize/v5.4.2 version.
https://github.com/kubernetes/kubernetes/pull/123339/files#diff-33ef32bf6c23acb95f5902d7097b7a1d5128ca061167ec0716715b0b9eeaa5f6R222-R224
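For anyone who wants to check which pin a particular kubernetes/kubernetes ref carries, grepping the root go.mod works; the refs and the exact output lines below are illustrative, and I'm assuming the root go.mod is where the pin lives, based on the diffs linked above:

# a release that still carries the pseudo-version
$ curl -s https://raw.githubusercontent.com/kubernetes/kubernetes/v1.30.3/go.mod | grep kustomize/kustomize
	sigs.k8s.io/kustomize/kustomize/v5 v5.0.4-0.20230601165947-6ce0bf390ce3

# current master, after https://github.com/kubernetes/kubernetes/pull/123339
$ curl -s https://raw.githubusercontent.com/kubernetes/kubernetes/master/go.mod | grep kustomize/kustomize
	sigs.k8s.io/kustomize/kustomize/v5 v5.4.2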
The issue still persists with v1.30.2:
Client Version: v1.29.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.1+k3s1
Is there any information on when the release that fixes it will come out and what its version number will be? This has caused some confusion already ;)
This is actually a valid module version, but it could be resolved by updating to a tagged kustomize version as well.
k3s has actually fixed it in their distribution of kubectl:
$ k3s kubectl version
Client Version: v1.30.1+k3s1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.1+k3s1
The issue still persists with v1.30.3:
Client Version: v1.30.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.3
@gppmilicia Looks like that change will be released in v1.31.
What happened:
A non-existent Kustomize version is shown when kubectl version is used.
What you expected to happen:
An existing Kustomize version to be displayed.
How to reproduce it (as minimally and precisely as possible):
Install kubectl v1.28.2 and run kubectl version.
Anything else we need to know?:
Running on Alpine.
Environment:
- Kubernetes client and server versions (use kubectl version): v1.28.2, v1.25 respectively
- OS (e.g: cat /etc/os-release): Alpine Linux