ahoehma opened this issue 3 months ago
Seems similar to
https://github.com/kubernetes-sigs/external-dns/issues/3754
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
What happened:
Someone on the team suddenly deployed an ingress with an AWS ELB and a hostname, and this seems to crash the whole external-dns pod:
time="2024-03-22T05:13:46Z" level=error msg="Failure in zone spice.eb-dev.siemens.cloud. [Id: /hostedzone/XXXXXXXXXXXXXXXXX] when submitting change batch: InvalidChangeBatch: [Tried to create reso │ │ time="2024-03-22T05:13:46Z" level=error msg="Failed submitting change (error: InvalidChangeBatch: [Tried to create resource record set [name='cin-info-service.spice.eb-dev.xxxxx.', type='TXT'] │ │ time="2024-03-22T05:13:47Z" level=error msg="Failed submitting change (error: InvalidChangeBatch: [Tried to create resource record set [name='clm-eks.spice.eb-dev.xxxxx.', type='TXT'] but it a │ │ time="2024-03-22T05:13:48Z" level=fatal msg="failed to submit all changes for the following zones: [/hostedzone/XXXXXXXXXXXXXXXXX]"
What you expected to happen:
I would like to see more details in the pod log.
It would also be good if external-dns didn't crash because of such an error. Or is it normal for errors like this to happen?
How can I prevent this on a production system?
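The errors say external-dns tried to create TXT ownership records that already exist in the hosted zone, e.g. left behind by another external-dns instance or an earlier deployment using the same (default) owner id. As a minimal sketch, these are the documented external-dns flags commonly used to make ownership explicit (the flag values here are assumptions):

```sh
# Sketch, not the reporter's config. A unique --txt-owner-id lets this
# instance recognize its own TXT ownership records instead of colliding
# with records created by a previous deployment; upsert-only keeps it
# from deleting records it does not own.
external-dns \
  --provider=aws \
  --source=ingress \
  --registry=txt \
  --txt-owner-id=eb-dev-cluster \
  --policy=upsert-only
```

With a distinct owner id per deployment, stale TXT records left behind by an old deployment can then be cleaned up by hand.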
How to reproduce it (as minimally and precisely as possible):
Not sure. Could it be that the entries were already in Route53, maybe from a previous deployment? See the check sketched below.
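One way to check that theory is to list the TXT records in the zone: external-dns ownership records carry a "heritage=external-dns,external-dns/owner=..." value that shows which owner id created them (zone id redacted here as in the logs):

```sh
# Hypothetical check against the zone from the logs above (id redacted).
aws route53 list-resource-record-sets \
  --hosted-zone-id XXXXXXXXXXXXXXXXX \
  --query "ResourceRecordSets[?Type=='TXT']"
```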
Anything else we need to know?:
I installed external-dns via the Terraform module aws-ia/eks-blueprints-addons/aws, version 1.16.1.
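For reference, a rough sketch of how that module wires up external-dns. The inputs `enable_external_dns` and `external_dns_route53_zone_arns` are module inputs in 1.x; the nested helm values pass-through and all concrete values here are assumptions, not copied from the actual setup:

```hcl
# Sketch, not the actual configuration from this environment.
module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "1.16.1"

  # assumed to come from the usual EKS module outputs
  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  enable_external_dns            = true
  external_dns_route53_zone_arns = ["arn:aws:route53:::hostedzone/XXXXXXXXXXXXXXXXX"]

  # assumption: txtOwnerId/policy are values of the external-dns helm chart
  external_dns = {
    values = [
      <<-EOT
        txtOwnerId: eb-dev-cluster
        policy: upsert-only
      EOT
    ]
  }
}
```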
Environment:
External-DNS version (use `external-dns --version`): 0.14.0