Open 0xRIZE opened 7 months ago
@rtroost2012: Did you have well-known Kubernetes labels integrated with your workloads before 1.16.0?
Our Kubernetes.yml looks something like this. We had not been using well-known labels before 1.16.0:
```yaml
---
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      jkube.eclipse.org/git-branch: redacted
      jkube.eclipse.org/git-commit: redacted
      jkube.eclipse.org/git-url: redacted
      jkube.eclipse.org/scm-tag: HEAD
      jkube.eclipse.org/scm-url: redacted
    labels:
      app: spring-boot-admin
      group: redacted
      owner: ghostbusters
      provider: jkube
      spring-boot: "true"
      version: 0.0.1-SNAPSHOT
....
```
Looking at the `field is immutable` error, it looks like there is a mismatch between what JKube is trying to apply to the Kubernetes cluster and what's already deployed there.
Could you please check which value has changed in the Deployment's selectors?
Adding
`<jkube.enricher.jkube-well-known-labels.enabled>false</jkube.enricher.jkube-well-known-labels.enabled>`
to our configuration reverts to the old behaviour, and no immutability issues are encountered.
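For context, a minimal sketch of where that property could live, assuming it is set in the project's `pom.xml` `<properties>` block (it can equivalently be passed on the Maven command line with `-D`):

```xml
<!-- Sketch: disable JKube's well-known-labels enricher via a Maven property.
     Placement in <properties> is an assumption; any standard Maven property
     source should work. -->
<properties>
  <jkube.enricher.jkube-well-known-labels.enabled>false</jkube.enricher.jkube-well-known-labels.enabled>
</properties>
```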
The Kubernetes YAML after upgrading to 1.16 looks like this:
```yaml
---
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      jkube.eclipse.org/git-branch: redacted
      jkube.eclipse.org/git-commit: redacted
      jkube.eclipse.org/git-url: redacted
      jkube.eclipse.org/scm-tag: HEAD
      jkube.eclipse.org/scm-url: redacted
    labels:
      app: spring-boot-admin
      app.kubernetes.io/managed-by: jkube
      app.kubernetes.io/name: spring-boot-admin
      app.kubernetes.io/part-of: redacted
      app.kubernetes.io/version: "20240219.880132"
      group: redacted
      owner: ghostbusters
      provider: jkube
      spring-boot: "true"
      version: "20240219.880132"
  [...]
  selector:
    app: spring-boot-admin
    app.kubernetes.io/managed-by: jkube
    app.kubernetes.io/name: spring-boot-admin
    app.kubernetes.io/part-of: redacted
    group: redacted
    provider: jkube
```
Previous selector:

```yaml
selector:
  app: spring-boot-admin
  provider: jkube
  group: redacted
```
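The mismatch above can be sketched as a diff between the two selectors, using the (redacted) values shown; this is only an illustration of which keys JKube 1.16.0 added, not JKube code:

```python
# Old selector, as generated before JKube 1.16.0.
old_selector = {
    "app": "spring-boot-admin",
    "provider": "jkube",
    "group": "redacted",
}

# New selector, as generated by JKube 1.16.0 with the
# well-known-labels enricher enabled.
new_selector = {
    "app": "spring-boot-admin",
    "app.kubernetes.io/managed-by": "jkube",
    "app.kubernetes.io/name": "spring-boot-admin",
    "app.kubernetes.io/part-of": "redacted",
    "group": "redacted",
    "provider": "jkube",
}

# Keys present in the new selector but not the old one.
added_keys = sorted(set(new_selector) - set(old_selector))
print(added_keys)
# → ['app.kubernetes.io/managed-by', 'app.kubernetes.io/name', 'app.kubernetes.io/part-of']
```

Any added key is enough to trigger the `field is immutable` rejection, because `apps/v1` forbids changing `spec.selector` on an existing Deployment.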
@rtroost2012: So your issue seems to happen when redeploying an already-deployed application on the Kubernetes cluster. I see the Deployment selector now contains new `app.kubernetes.io` keys compared to the previous version.
As per the Kubernetes documentation, a Deployment's selector field cannot be changed once it's created:

> Note: In API version apps/v1, a Deployment's label selector is immutable after it gets created.

Is it possible in your case to delete the previous Deployment and then run `k8s:apply` again?
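The suggested workaround might look like the following, assuming the Deployment name `spring-boot-admin` from the error message and the Kubernetes Maven Plugin's `k8s:apply` goal (note this deletes the running workload and therefore causes downtime):

```shell
# Remove the existing Deployment so the new selector can be created fresh
kubectl delete deployment spring-boot-admin

# Re-apply the JKube-generated manifests
mvn k8s:apply
```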
Deleting deployments, especially in production, is not something we were planning to do straight away, as it causes downtime. The workaround of disabling the new behaviour is sufficient for now. I am however puzzled by the fact it was introduced (and enabled) in the first place, given its potential impact on existing deployments.
> I am however puzzled by the fact it was introduced (and enabled) in the first place given its potential impact for existing deployments.
Sorry, I did not know about the immutability of Deployment selectors in Kubernetes. We don't have any redeployment-scenario testing. I'll check with the team on how to handle this scenario.
> I am however puzzled by the fact it was introduced (and enabled) in the first place given its potential impact for existing deployments.
We did consider adding it as an opt-in feature, but we finally decided to provide the disabling mechanism and a warning notice in the changelog. Not considering the immutability of already existing deployments was a mistake. We'll need to figure this out too when we finally remove the legacy labels.
Describe the bug
Introduced in https://github.com/eclipse/jkube/issues/1700
With a relatively standard JKube configuration, we run into immutability issues after upgrading from v1.15.0 to v1.16.0.
I have limited experience with Kubernetes, but my understanding is that label selectors cannot be changed for existing deployments.
```
The Deployment "spring-boot-admin" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"spring-boot-admin", "app.kubernetes.io/managed-by":"jkube", "app.kubernetes.io/name":"spring-boot-admin", "app.kubernetes.io/part-of":"[redacted]", "group":"[redacted]", "provider":"jkube"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
```
Eclipse JKube version
1.16.0
Component
Kubernetes Maven Plugin
Apache Maven version
3.8.5
Gradle version
None
Steps to reproduce
JKube Configuration
Expected behavior
Backward compatibility with the old labels, or a more explicit warning in the changelog that this change might be breaking.
Runtime
Kubernetes (vanilla)
Kubernetes API Server version
1.25.3
Environment
Amazon
Eclipse JKube Logs
Sample Reproducer Project
Go to any quickstart project
Additional context
No response