bala151187 opened this issue 1 year ago
I am seeing the same issue with version 1.1.5. I installed version 0.0.27 and I see the ConfigMap getting correctly patched. I followed the steps in this tutorial exactly, and it only worked with version 0.0.27. I haven't tried other versions yet; I just picked 0.0.27 since that's the version the linked tutorial mentions.
/cc @aws-controllers-k8s/rds-maintainer
Patching the dbinstances CRD with this file resolved this for me.
I followed the same tutorial as @aberle and was also observing FieldExport failing to add keys to a ConfigMap using values from a MariaDB instance.
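For context, a FieldExport wired up this way looks roughly like the sketch below. This is a minimal example assuming the usual ACK FieldExport shape; the resource names and the exported status path are hypothetical, not taken from the tutorial:

kubectl apply -f - <<EOF
apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: export-db-host          # hypothetical name
spec:
  to:
    name: db-config             # hypothetical ConfigMap that should receive the key
    kind: configmap
  from:
    path: ".status.endpoint.address"    # assumed field; the DBInstance endpoint
    resource:
      group: rds.services.k8s.aws
      kind: DBInstance
      name: my-mariadb          # hypothetical DBInstance name
EOF

With an outdated dbinstances CRD, the DBInstance never reconciles cleanly, so the key never lands in the ConfigMap, which matches the failure mode described in this thread.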
Environment
- Kubernetes version: 1.27
- ACK service controller: rds
- Controller version: 1.1.5
I originally installed ack-rds-controller 0.0.27, created some dbinstance and fieldexport resources, and thought I'd cleaned everything up before uninstalling the controller and installing version 1.1.5. This morning I found I hadn't deleted some of those resources. After deleting them, I patched the dbinstances CRD using the command below, and now fieldexport is working:
kubectl patch crd dbinstances.rds.services.k8s.aws --patch-file rds.services.k8s.aws_dbinstances.yaml
I didn't try patching the CRD before deleting those resources, but for those still experiencing the problem, I'm wondering whether leftover resources from previously installed controllers are blocking their CRD patches.
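A quick way to check (a sketch, assuming the default resource names) is to look for surviving ACK resources and at which versions the API server has stored for the CRD:

# any dbinstance/fieldexport resources left over from the old install
kubectl get dbinstances,fieldexports --all-namespaces

# versions the API server has stored objects under for this CRD
kubectl get crd dbinstances.rds.services.k8s.aws -o jsonpath='{.status.storedVersions}'

If storedVersions still lists a version that the new manifest drops, the API server will reject a patch that removes it, which could explain the behavior.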
Issues go stale after 180d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 60d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
Stale issues rot after 60d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 60d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle rotten
Describe the bug
Using RDS ACK version v1.1.2. I am trying to create an RDS backend; the RDS creation is successful and the database is available, but the FieldExport values are not populated, which causes the application container to fail to start.
Kubectl describe DBInstance
Logs from RDS controller
Steps to reproduce
Check the installed CRD:
$ kubectl get crds | grep dbinstances
The RDS controller version v1.1.2 was updated a month ago, but the CRD was not updated because of a Helm 3 limitation: Helm 3 installs CRDs on first install and does not upgrade them afterwards.
Expected outcome
I was expecting these logs from the RDS controller:
I know the resource sync status is FALSE, but what really made it NOT sync is the outdated CRD. I used this repo to download the CRDs, then ran the below command to fix this issue. This updated the CRD.
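The exact command isn't quoted above, but assuming the CRD manifests downloaded from the rds-controller repo (the file name is mirrored from the earlier comment, so treat it as an assumption), the update looks something like:

# patch the existing CRD with the downloaded manifest, as suggested earlier in this thread
kubectl patch crd dbinstances.rds.services.k8s.aws --patch-file rds.services.k8s.aws_dbinstances.yaml

# or replace it wholesale; note that plain "kubectl apply" can fail on CRDs this large
# because of the last-applied-configuration annotation size limit
kubectl replace -f rds.services.k8s.aws_dbinstances.yaml

Either way, Helm 3 won't do this on upgrade, so it has to be a manual step.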
I got in touch with the #aws-controllers-k8s channel on the Kubernetes Slack community. People there said the CRD is backward compatible, but it isn't: the old CRD was missing some spec fields, like caCertificateIdentifier and storageThroughput. So does that mean I have to fix this using the patch command? Please advise.
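One way to check whether the installed CRD already carries those fields before patching (a read-only sketch):

# prints the field documentation if the CRD is current;
# an error like "field does not exist" means the CRD is stale
kubectl explain dbinstances.spec.caCertificateIdentifier
kubectl explain dbinstances.spec.storageThroughput

If those fields are missing, the CRD has to be patched or replaced by hand, since Helm 3 won't upgrade it.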
Environment
- Kubernetes version: 1.24