Open ghostsquad opened 4 years ago
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Not stale
/remove-lifecycle stale
Have the same issue:
time="2020-10-28T01:57:52Z" level=debug msg="No endpoints could be generated from service ns1/go-app"
time="2020-10-28T01:57:52Z" level=debug msg="No endpoints could be generated from ingress ns1/go-app"
No idea how it comes to that conclusion when the endpoints clearly exist:
$ kc describe svc -n ns1 go-app | grep Endpoints
Endpoints: 100.117.196.198:8081,100.117.234.8:8081
$ kc get endpoints -n ns1 --show-labels
NAME ENDPOINTS AGE LABELS
go-app 100.117.196.198:8081,100.117.234.8:8081 17h app=go-app,dns=route53,name=go-app
This is k8s v1.18.10
and image k8s.gcr.io/external-dns/external-dns:v0.7.4
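For anyone else hitting this: the "endpoints" in that log line are the DNS endpoints external-dns wants to create, not the Kubernetes Endpoints object. For the ingress source they come from the address published in the Ingress status, so a quick check, assuming the same go-app ingress in ns1 as above, is:
kubectl get ingress -n ns1 go-app
kubectl get ingress -n ns1 go-app -o jsonpath='{.status.loadBalancer.ingress}'
If the second command prints nothing, the ingress controller isn't publishing an address and external-dns has nothing to create a record for.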
Ditto. Anyone got any idea how to get external-dns working? I've followed the instructions here: https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/azure.md
same issue
time="2021-03-03T08:44:08Z" level=debug msg="No endpoints could be generated from ingress harbor/harbor-harbor-ingress"
time="2021-03-03T08:44:08Z" level=debug msg="No endpoints could be generated from ingress harbor/harbor-harbor-ingress-notary"
time="2021-03-03T08:44:08Z" level=debug msg="No endpoints could be generated from ingress jaeger/jaeger-ingress"
The generation of endpoints seems to be directly tied to the Ingress having an "Address". In my case, I was able to get this working by setting my ingress controller to publish its address to the ingress resources.
For traefik:
kubernetesIngress.publishedService.enabled=true
For nginx:
controller.publishService.enabled=true
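If the controllers are installed via their Helm charts, those are chart values, so something like the following should do it (assuming the upstream charts are added as the traefik and ingress-nginx Helm repos; value paths can differ between chart versions, so double-check against yours):
helm upgrade --install traefik traefik/traefik --set kubernetesIngress.publishedService.enabled=true
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --set controller.publishService.enabled=true
After that, kubectl get ingress should show an ADDRESS for each ingress, and external-dns starts generating endpoints from them.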
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Mark this issue or PR as rotten with /lifecycle rotten
Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
Perhaps you don't have the ingress controller installed. To install it you need to run the following command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
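If you go that route, you can verify the controller came up and got an external address with something like this (the namespace and service name are the defaults from that manifest):
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx ingress-nginx-controller
The Ingress resources only get an address for external-dns to use once that controller service has an external IP or hostname.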
For me the problem was with the controller; basically it wasn't adding the routing info that my host should be pointing to. Thanks @tadrian88
I was running into a similar problem. In my case, I'm using aws-load-balancer-controller, and the controller was failing to create my load balancer because of an invalid certificate. It would be nice if external-dns gave a slightly nicer error in the logs here, especially with debug logging turned on, but at least in my case it wasn't external-dns's fault.
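In case it helps anyone debugging the same thing: the underlying error tends to show up in the ingress events or in the controller's own logs rather than in external-dns. Something like the following, assuming the controller is installed under its usual name in kube-system (adjust namespace/name to your install):
kubectl describe ingress <your-ingress> -n <your-namespace>
kubectl logs -n kube-system deployment/aws-load-balancer-controller
Once the ALB is actually created and its hostname lands on the Ingress status, external-dns picks it up.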
The generation of endpoints seems to be directly tied to the Ingress having an "Address". In my case, I was able to get this working by setting my ingress controller to publish its address to the ingress resources.
For traefik:
kubernetesIngress.publishedService.enabled=true
For nginx:
controller.publishService.enabled=true
I have the same problem but with Istio, any solution?
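For reference, the things I still need to check on my side (assumptions based on the address discussion above, not verified yet) are whether external-dns is running with the Istio sources enabled, e.g. in its args:
--source=istio-gateway
--source=istio-virtualservice
and whether the ingress gateway actually has an external address:
kubectl get svc -n istio-system istio-ingressgateway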
What happened:
I noticed that external-dns was not creating a Route53 entry in AWS for an ingress that I created. I later found out that part of the ingress was malformed, in that it was pointing to a service that didn't exist. The ingress in question is an ALB Ingress, and the ALB itself was created successfully.
The following log statement is what triggered that:
What you expected to happen:
Despite the invalid backing service on the ingress, external-dns should have still created an entry pointing to the ALB.
How to reproduce it (as minimally and precisely as possible):
I believe (untested) that this can be reproduced by creating an ingress that points to a non-existent service.
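A minimal sketch of such an ingress, using the current networking.k8s.io/v1 API (the name, host, and annotation are made up for illustration, and my-missing-service deliberately does not exist):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: broken-ingress
  annotations:
    kubernetes.io/ingress.class: alb
spec:
  rules:
  - host: broken.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-missing-service
            port:
              number: 80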
Anything else we need to know?:
Environment:
External-DNS version (use external-dns --version): 0.7.1