willhughes-au opened this issue 1 year ago
Hi,
I'm running 0.13.1 and it's working fine.
The Helm chart expects it as "extraArgs":

extraArgs:
  - --exclude-domains=xxx
  - --exclude-domains=xxx
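For reference, here is a minimal values sketch, assuming the kubernetes-sigs external-dns Helm chart, with the zone names from this issue standing in for the redacted ones:

```yaml
# values.yaml (sketch; zone names are illustrative placeholders)
extraArgs:
  - --domain-filter=example.com
  - --exclude-domains=subdomain.example.com
```

Each extraArgs entry is appended verbatim to the container's args, so the flags should show up on the pod spec exactly as listed.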
Yeah, I am providing it as extraArgs in the chart.
I can see the arguments being passed to the pod. That's not the issue.
It's definitely ignoring it for me on 0.13.5.
Could this be a manifestation of https://github.com/kubernetes-sigs/external-dns/issues/3753 and https://github.com/kubernetes-sigs/external-dns/issues/3948?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
What happened:
I have an AWS account with two Route53 zones:

- example.com
- subdomain.example.com
When I run external-dns with --domain-filter=example.com, it also picks up the zone subdomain.example.com, and I can see the following output from external-dns:

Applying provider record filter for domains: [example.com. .example.com. subdomain.example.com. .subdomain.example.com.]
When I then pass in --exclude-domains=subdomain.example.com, I get the same output.

What you expected to happen:
I expect that --exclude-domains (and the other filtering options, as documented) would apply to the Zones method, and those zones should not be returned.

How to reproduce it (as minimally and precisely as possible):
Have an AWS account with at least two Route53 zones, where one zone is a subdomain of another.
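For a concrete reproduction, a minimal sketch of the relevant container args (assuming a plain Deployment with the AWS provider; image tag and zone names taken from this report):

```yaml
# Sketch of the relevant container args (not a complete Deployment)
containers:
  - name: external-dns
    image: registry.k8s.io/external-dns/external-dns:v0.13.5
    args:
      - --provider=aws
      - --source=service
      - --domain-filter=example.com              # matches example.com and its subdomains
      - --exclude-domains=subdomain.example.com  # expected to drop the subdomain zone, but appears to be ignored
```

With these args, the "Applying provider record filter" log line above still lists subdomain.example.com.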
Anything else we need to know?:
Environment:

- External-DNS version (use external-dns --version): Running from container registry.k8s.io/external-dns/external-dns:v0.13.5