Open · iameli opened this issue 2 years ago
This is not a bug. You specified `service` as the source type; by default, all service types are considered valid sources, including `NodePort` services. The target of the DNS record generated for a service of type `NodePort` will contain the IPs of all the nodes, hence the need to request the `Node` resource from the API server. Excluding `NodePort` services from the service types is supposed to avoid requesting the `Node` resource.
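In Deployment terms, that exclusion would look something like the snippet below. This is a sketch only: it uses the `--service-type-filter` flag mentioned further down this thread, and, as later comments point out, current code still requests nodes regardless.

```yaml
# Sketch: restrict the service source to LoadBalancer services so that
# NodePort services are never considered as DNS record sources.
args:
  - --source=service
  - --service-type-filter=LoadBalancer
```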
Same error when using `--service-type-filter=LoadBalancer --source=service` on my end.
Yeah, from glancing through the code the request for nodes appears to be unconditional; unless I'm misunderstanding something, `--source=service` will always require a search for nodes.

Seems like better behavior would be to not require nodes, but then print an error if there are any `NodePort` or headless `ClusterIP` services that would require node information?
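Until something like that is implemented, the practical upshot is that any `--source=service` deployment needs cluster-wide read access to nodes. A minimal RBAC sketch, modeled on the upstream example manifests (the name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  # The service source watches services, endpoints, and pods.
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "watch", "list"]
  # Nodes are requested unconditionally whenever --source=service is set,
  # so this rule is needed even with --service-type-filter=LoadBalancer.
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "watch", "list"]
```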
FWIW, for my use case, we were able to just migrate entirely to using `--source=ingress`, which is now working fine.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/remove-lifecycle rotten
The bug isn't a huge deal, but the suggestion in my comment here would still be an improvement in behavior: https://github.com/kubernetes-sigs/external-dns/issues/3169#issuecomment-1327855220
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
I have encountered this as well. Maybe updating the docs to say that the `service` source is not supported when namespaced would help.
I also encountered an offshoot of this issue in a setup of my own, #4834. Agree that the docs could be better here.
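For what it's worth, `Node` is a cluster-scoped resource, so a namespaced `Role` can never grant access to it; even an external-dns instance watching a single namespace needs a `ClusterRole` bound to its service account, roughly like this (names and namespace are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: external-dns   # wherever the controller runs
```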
What happened:

I ran external-dns like so:

And then it times out trying to get nodes:

What you expected to happen:
I'm not using `--source=node` or `--publish-node-ip` or anything like that, so why is it requesting nodes? And fetching nodes directly with `kubectl` works, so why is external-dns timing out?

Versions: