aws-controllers-k8s / community

AWS Controllers for Kubernetes (ACK) is a project enabling you to manage AWS services from Kubernetes
https://aws-controllers-k8s.github.io/community/
Apache License 2.0

RDS controller resource sync issue #1831

Open bala151187 opened 1 year ago

bala151187 commented 1 year ago

Describe the bug
Using RDS ACK controller version v1.1.2. Creating the RDS backend succeeds and the database becomes available, but the FieldExport values are never populated, which causes the application container to fail to start.
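For reference, the FieldExport in question follows the usual ACK shape; a minimal sketch is below, where everything except the DBInstance name and namespace (taken from the controller logs further down) is a hypothetical placeholder. The symptom is that the target ConfigMap never receives the exported key.

apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: dev-devops-demo-host        # hypothetical name
  namespace: devops
spec:
  to:
    name: dev-devops-demo-conn      # hypothetical target ConfigMap
    kind: configmap
  from:
    path: ".status.endpoint.address"
    resource:
      group: rds.services.k8s.aws
      kind: DBInstance
      name: dev-devops-demo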

Kubectl describe DBInstance

Status:
  Ack Resource Metadata:
    Arn:               arn
    Owner Account ID:  xxxxxxx
    Region:            us-east-1
  Conditions:
    Last Transition Time:  2023-06-15T19:32:24Z
    Status:                False
    Type:                  ACK.ResourceSynced
    Last Transition Time:  2023-06-15T19:32:24Z
    Message:               Late initialization successful
    Reason:                Late initialization successful
    Status:                True
    Type:                  ACK.LateInitialized

Logs from RDS controller

2023-06-15T19:19:53.285Z INFO ackrt desired resource state has changed {"account": "xxx", "role": "", "region": "us", "kind": "DBInstance", "namespace": "devops", "name": "dev-devops-demo", "is_adopted": false, "generation": 2, "diff": [{"Path":{"Parts":["Spec","CACertificateIdentifier"]},"A":null,"B":"rds-ca-2019"},{"Path":{"Parts":["Spec","StorageThroughput"]},"A":null,"B":0}]}

2023-06-15T19:19:54.039Z INFO ackrt updated resource {"account": "xxxx", "role": "", "region": "us", "kind": "DBInstance", "namespace": "devops", "name": "dev-devops-demo", "is_adopted": false, "generation": 2}

Steps to reproduce
The CRD was created when the controller was first installed:

$ kubectl get crds | grep dbinstances

dbinstances.rds.services.k8s.aws 2022-12-16T17:52:32Z

The RDS controller was upgraded to v1.1.2 about a month ago, but the CRD was not updated because of the Helm 3 limitation (helm upgrade does not upgrade CRDs).
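A quick way to confirm the installed CRD is stale is to check whether it already knows the spec fields the controller is diffing on (field names taken from the diff in the log above); if the grep prints nothing, the CRD predates those fields:

$ kubectl get crd dbinstances.rds.services.k8s.aws -o yaml | grep -iE 'cacertificateidentifier|storagethroughput'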

Expected outcome
I was expecting to see log lines like this from the RDS controller:

2023-06-16T13:11:28.070Z INFO exporter.field-export-reconciler patched target config map

I understand the FieldExport is not applied because the resource sync status is False, but what actually made it not sync was the outdated CRD.

I used this repo to download the CRDs, then ran the command below, which updated the CRD and fixed the issue:

kubectl patch crd dbinstances.rds.services.k8s.aws --patch-file rds.services.k8s.aws_dbinstances.yaml
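After patching, one way to check that the instance re-syncs (resource name and namespace as in the logs above) is to read the ACK.ResourceSynced condition directly; it should flip to True once the controller reconciles again:

$ kubectl get dbinstance dev-devops-demo -n devops -o jsonpath='{.status.conditions[?(@.type=="ACK.ResourceSynced")].status}'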

I got in touch with the #aws-controllers-k8s channel on the Kubernetes Slack community.

I heard from people there that the CRD is backward compatible, but it is not: the old CRD was missing some spec fields, such as caCertificateIdentifier and storageThroughput.

Does that mean I have to fix this with the patch command? Please advise.

Environment

aberle commented 1 year ago

I am seeing the same issue with version 1.1.5. I installed version 0.0.27 and I see the ConfigMap getting correctly patched. I followed the steps in this tutorial exactly and it only worked with version 0.0.27. I haven't tried other versions yet, I just picked 0.0.27 since that's the version that the linked tutorial mentions.

a-hilaly commented 1 year ago

/cc @aws-controllers-k8s/rds-maintainer

michael-cr41g commented 1 year ago

Patching the dbinstances CRD with this file resolved the issue for me.

I followed the same tutorial as @aberle and was also observing FieldExport failing to add keys to a ConfigMap using values from a MariaDB instance.

Environment

I originally installed ack-rds-controller 0.0.27, created some DBInstance and FieldExport resources, and thought I'd cleaned everything up before uninstalling the controller and installing version 1.1.5. This morning I found I hadn't deleted some resources. After deleting them, I patched the dbinstances CRD using the command below, and now FieldExport is working:

kubectl patch crd dbinstances.rds.services.k8s.aws --patch-file rds.services.k8s.aws_dbinstances.yaml

I didn't try patching the CRD before deleting those resources, but for those still experiencing the problem, I wonder whether resources left over from previously installed controllers are blocking their CRD patches.
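For anyone who wants to check for such leftovers before patching, listing the RDS kind together with the common ACK FieldExport kind across all namespaces should surface them (FieldExport lives in the shared services.k8s.aws group, so fully qualified names are used here):

$ kubectl get dbinstances.rds.services.k8s.aws,fieldexports.services.k8s.aws -A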

ack-bot commented 9 months ago

Issues go stale after 180d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 60d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle stale

ack-bot commented 3 months ago

Issues go stale after 180d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 60d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle stale

ack-bot commented 1 month ago

Stale issues rot after 60d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 60d of inactivity. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle rotten