aws-controllers-k8s / community

AWS Controllers for Kubernetes (ACK) is a project enabling you to manage AWS services from Kubernetes
https://aws-controllers-k8s.github.io/community/
Apache License 2.0

Add ability to adopt a pre-existing resource via an annotation rather than using an AdoptedResource #1965

Open klagroix opened 9 months ago

klagroix commented 9 months ago

Is your feature request related to a problem? Currently our EKS upgrade strategy is blue/green: we provision a new cluster and migrate workloads over to it. For most applications, this is as simple as running the same application (defined in internal Helm charts) on both clusters temporarily before uninstalling the chart from the old cluster.

If we want to start tracking AWS resources alongside our applications, this gets complicated. By default, when we remove the application on the old cluster, the AWS resource (e.g. an S3 bucket) gets deleted as well. I can set the retain deletion-policy annotation on the Bucket object (a minimal sketch follows the status output below), but when the Bucket object is applied on the new cluster, ACK won't recognize it, since the Bucket wasn't created on that cluster and it isn't an AdoptedResource. Status shows:

    - lastTransitionTime: '2023-12-14T18:04:08Z'
      message: Resource already exists
      reason: >-
        This resource already exists but is not managed by ACK. To bring the
        resource under ACK management, you should explicitly adopt the resource
        by creating a services.k8s.aws/AdoptedResource
      status: 'True'
      type: ACK.Terminal
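For reference, the retain behavior mentioned above is a minimal sketch assuming the standard services.k8s.aws/deletion-policy annotation; the bucket name is a placeholder:

    apiVersion: s3.services.k8s.aws/v1alpha1
    kind: Bucket
    metadata:
      name: my-app-bucket            # placeholder name
      annotations:
        # keep the underlying S3 bucket in AWS when this Bucket object is deleted
        services.k8s.aws/deletion-policy: retain
    spec:
      name: my-app-bucket            # S3 bucket name (placeholder)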

It seems like we'd need to support both Bucket and AdoptedResource manifests and apply them differently based on whether this is the first time we've deployed the application or whether we're just migrating it to a new cluster.
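For context, the AdoptedResource workaround would look roughly like the sketch below (names are placeholders); it is an extra manifest that would only be applied on the new cluster:

    apiVersion: services.k8s.aws/v1alpha1
    kind: AdoptedResource
    metadata:
      name: adopt-my-app-bucket      # placeholder name
    spec:
      aws:
        nameOrID: my-app-bucket      # existing S3 bucket name (placeholder)
      kubernetes:
        group: s3.services.k8s.aws
        kind: Bucket
        metadata:
          name: my-app-bucket        # Bucket object ACK should create for the adopted bucket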

Describe the solution you'd like I'd like to be able to support adoption of existing resources via an annotation, for example services.k8s.aws/adopt-resource: true (sketched below). This would allow the same manifest to be used for both creation and adoption of resources.

(note: I didn't come up with this idea myself; it was provided as a suggestion in the community Slack here: https://kubernetes.slack.com/archives/C0402D8JJS1/p1702579196939319)
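For illustration only, a manifest under this proposal might look like the sketch below; the annotation key and value are hypothetical and simply mirror the suggestion above:

    apiVersion: s3.services.k8s.aws/v1alpha1
    kind: Bucket
    metadata:
      name: my-app-bucket                      # placeholder name
      annotations:
        # hypothetical annotation: ask ACK to adopt the existing bucket
        # instead of going terminal with "Resource already exists"
        services.k8s.aws/adopt-resource: "true"
    spec:
      name: my-app-bucket                      # existing S3 bucket name (placeholder)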

Describe alternatives you've considered We've considered using blue/green AWS services to coincide with the EKS migration; however, this isn't practical for all situations (e.g. we need long-term storage in S3, we have persistent data in RDS, etc.).

ack-bot commented 2 months ago

Issues go stale after 180d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 60d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle stale

klagroix commented 2 months ago

/remove-lifecycle stale