aws / karpenter-provider-aws

Karpenter is a Kubernetes Node Autoscaler built for flexibility, performance, and simplicity.
https://karpenter.sh
Apache License 2.0

v1beta1 resources are still there after v1 migration #6841

Open timohirt opened 3 weeks ago

timohirt commented 3 weeks ago

Description

Observed Behavior:

I updated from 0.36.1 to 0.37.1 and then to 1.0.0, which worked fine. Then I migrated the NodePool and EC2NodeClass manifests to the v1 schema.
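
Assuming the standard Helm-based install from the Karpenter docs, each upgrade step looks roughly like this (the cluster name below is a placeholder, not from my actual setup):

helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --namespace kube-system \
  --version 1.0.0 \
  --set settings.clusterName=dev-0 \
  --wait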

When I get the resources in the cluster, I see that both versions exist at the same time. Here is an example for NodePool:

kubectl get nodepools.v1.karpenter.sh
NAME      NODECLASS   NODES   READY   AGE
default   default     0       True    7m2s

and here for v1beta1:

kubectl get nodepools.v1beta1.karpenter.sh
NAME      NODECLASS
default   default

Expected Behavior:

I tried deleting the v1beta1 resources, but that deletes the v1 resources as well. So, how can I get rid of the old version?
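
To illustrate, deleting through either version removes the same underlying object, so the v1 resource disappears too:

kubectl delete nodepools.v1beta1.karpenter.sh default
kubectl get nodepools.v1.karpenter.sh default   # now returns NotFound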

Reproduction Steps (Please include YAML):

  1. Start with Karpenter version 0.37.1.
  2. Deploy an EC2NodeClass using the v1beta1 schema:
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2023
  blockDeviceMappings:
  - deviceName: /dev/xvdu
    ebs:
      deleteOnTermination: true
      iops: 30000
      throughput: 125
      volumeSize: 50Gi
      volumeType: gp3
  metadataOptions:
    httpEndpoint: enabled
    httpProtocolIPv6: disabled
    httpPutResponseHopLimit: 2
    httpTokens: required
  role: karpenter-instance-role
  securityGroupSelectorTerms:
  - id: sg-xxxxxxxx
  subnetSelectorTerms:
  - id: subnet-xxxxxx
  - id: subnet-0xxxxxx
  tags:
    application-id: dev-0
    eks:cluster-name: dev-0
  3. Update Karpenter to version 1.0.0.
  4. Update the EC2NodeClass to the v1 schema and deploy:
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2023
  amiSelectorTerms:
  - alias: al2023@latest
  blockDeviceMappings:
  - deviceName: /dev/xvdu
    ebs:
      deleteOnTermination: true
      iops: 30000
      throughput: 125
      volumeSize: 50Gi
      volumeType: gp3
  metadataOptions:
    httpEndpoint: enabled
    httpProtocolIPv6: disabled
    httpPutResponseHopLimit: 2
    httpTokens: required
  role: karpenter-instance-role
  securityGroupSelectorTerms:
  - id: sg-xxxxxxxx
  subnetSelectorTerms:
  - id: subnet-xxxxxx
  - id: subnet-0xxxxxx
  tags:
    application-id: dev-0
    eks:cluster-name: dev-0

Versions:

jmdeal commented 3 weeks ago

This is expected behavior. Karpenter 1.0.0 includes conversion webhooks, which allow users to use both the v1beta1 and v1 versions of the CRDs. The v1 version is the storage version, but objects can still be converted to v1beta1 (which is what happens when you do a kubectl get specifying the v1beta1 version). Importantly, these are the same underlying resources, which is why deleting one appears to delete both.
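
You can see this on the CRD itself: both versions are served, but only v1 is marked as the storage version. A quick way to check (this works for any CRD):

kubectl get crd nodepools.karpenter.sh \
  -o jsonpath='{range .spec.versions[*]}{.name}{" served="}{.served}{" storage="}{.storage}{"\n"}{end}'

# versions that may still have objects persisted in etcd
kubectl get crd nodepools.karpenter.sh -o jsonpath='{.status.storedVersions}{"\n"}'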

timohirt commented 3 weeks ago

Thank you for the answer. So, with a later release, will they be deleted automatically?

jmdeal commented 2 weeks ago

Not exactly deleted, but we'll be dropping the conversion webhooks and the v1beta1 version from the vended CRD at v1.1.0. The upgrade process to v1.1.0 will include a step to ensure all stored data is on the latest storage version before upgrading the CRD, probably with a tool like kubernetes-sigs/kube-storage-version-migrator.
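
As a rough sketch of what such a migration boils down to (illustrative only, not an official upgrade step): issue a no-op write to every object so it gets re-persisted at the current v1 storage version, then trim the CRD's status.storedVersions:

# re-persist every NodePool at the v1 storage version
kubectl get nodepools.v1.karpenter.sh -o name \
  | xargs -I{} kubectl patch {} --type=merge -p '{}'

# once no objects are stored as v1beta1, drop it from storedVersions
# (requires kubectl >= 1.24 for --subresource)
kubectl patch crd nodepools.karpenter.sh --subresource=status --type=merge \
  -p '{"status":{"storedVersions":["v1"]}}'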

github-actions[bot] commented 5 days ago

This issue has been inactive for 14 days. StaleBot will close this stale issue after 14 more days of inactivity.