Closed · raghulkrishna · closed 2 years ago
Hello...
Did you try to use the "domain-filter" option? https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md
--aws-zone-type=private will filter for the domain "internal.company.com",
and --aws-zone-type=public will filter for the domain "company.com".
This way each external-dns instance knows which hostnames it should manage.
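For reference, a minimal sketch of that split, assuming two separate external-dns deployments (the domains are the examples above; the owner ids are placeholders):

# Private-zone instance: only considers private hosted zones
args:
  - --source=ingress
  - --provider=aws
  - --aws-zone-type=private
  - --domain-filter=internal.company.com
  - --txt-owner-id=external-dns-private    # placeholder owner id

# Public-zone instance: only considers public hosted zones
args:
  - --source=ingress
  - --provider=aws
  - --aws-zone-type=public
  - --domain-filter=company.com
  - --txt-owner-id=external-dns-public     # placeholder owner id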
@pitinga yes, I tried that, but --aws-zone-type doesn't seem to be working: both zones host the same domain, and some records need to go to the private zone and some to the public zone.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
What is the status of this issue? I'm seeing exactly the same behaviour.
External DNS creates the ingress records in both the private and the public zone. I have this kind of manifest, which creates an entry in both zones:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig":
      { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-central-1:xxxxx:certificate/xxxxx,arn:aws:acm:eu-central-1:xxxxxx:certificate/xxxxxxx
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "60"
    alb.ingress.kubernetes.io/healthcheck-path: /actuator/health
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "50"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=600
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=10
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "5"
    alb.ingress.kubernetes.io/waf-acl-id: xxxxxxx
    kubernetes.io/ingress.class: alb
  labels:
    project: test
  name: test-ing
  namespace: test
spec:
  rules:
  - host: my.host
We have two external-dns controllers, one for the public hosted zone and one for the private hosted zone:
Args:
  --source=service
  --source=ingress
  --domain-filter=OURDNS
  --provider=aws
  --policy=upsert-only
  --registry=txt
  --aws-zone-type=private
  --annotation-filter=kubernetes.io/ingress.class=alb
  --txt-owner-id=OURPRIVATEID

Args:
  --source=service
  --source=ingress
  --domain-filter=OURDNS
  --provider=aws
  --policy=upsert-only
  --annotation-filter=kubernetes.io/ingress.class=alb
  --aws-zone-type=public
  --registry=txt
  --txt-owner-id=OURPUBLICID
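With --registry=txt, each controller marks the records it owns with an ownership TXT record, so the distinct --txt-owner-id values are what keep the two instances from touching each other's entries. That ownership record looks roughly like this (a sketch; the exact resource path depends on the source object):

# TXT record created by the private controller for the ingress above:
"heritage=external-dns,external-dns/owner=OURPRIVATEID,external-dns/resource=ingress/test/test-ing"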
I would like the Route 53 entry to be created depending on the ingress annotation alb.ingress.kubernetes.io/scheme.
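One way to get that (a sketch, not tested here): external-dns's --annotation-filter accepts a label-selector-style expression evaluated against annotations, so each controller could select on the ALB scheme annotation directly instead of on the ingress class:

# Private-zone controller: only ingresses with an internal ALB
  --annotation-filter=alb.ingress.kubernetes.io/scheme=internal

# Public-zone controller: only ingresses with an internet-facing ALB
  --annotation-filter=alb.ingress.kubernetes.io/scheme=internet-facing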
did you find anything @raghulkrishna ?
Hi @raghulkrishna,
I found my error: I had both controllers (public and private) using the same filter, --annotation-filter=kubernetes.io/ingress.class=alb, and every ingress carried that annotation too.
Changing the private controller's filter to --annotation-filter=kubernetes.io/ingress.class=internal-alb (with the matching annotation on the corresponding ingresses) resolved the problem.
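For anyone hitting the same thing, a minimal sketch of that working split (internal-alb is just the value used above):

# Ingresses that should only get a private record:
metadata:
  annotations:
    kubernetes.io/ingress.class: internal-alb

# Private-zone controller selects only those:
  --annotation-filter=kubernetes.io/ingress.class=internal-alb

# Public ingresses keep kubernetes.io/ingress.class: alb and are matched
# by the public-zone controller's
# --annotation-filter=kubernetes.io/ingress.class=alb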
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.
What happened: I created two external-dns configs and deployments, one with the arg --aws-zone-type=public and the other with --aws-zone-type=private. But when I point an ingress at the private external-dns, it creates records in both the private and public zones, and vice versa. Is this expected behaviour, or is there a workaround?
Environment:
- External-DNS version (external-dns --version): 0.7.1
- Configuration:

helm install external-dns stevehipwell/external-dns \
  --set provider=aws \
  --set source=ingress \
  --set policy=sync \
  --set registry=txt \
  --set txtOwnerId=my-hostedzone-identifier \
  --set interval=30s \
  --set aws-zone-type=private \
  --set rbac.create=true \
  --set rbac.serviceAccountName=external-dns \
  --set rbac.serviceAccountAnnotations.eks.amazonaws.com/role-arn=

helm install publicexternal-dns stevehipwell/external-dns \
  --set provider=aws \
  --set source=ingress \
  --set policy=sync \
  --set registry=txt \
  --set txtOwnerId=my-hostedzone-identifier \
  --set interval=30s \
  --set aws-zone-type=public \
  --set rbac.create=true \
  --set rbac.serviceAccountName=publicexternal-dns \
  --set rbac.serviceAccountAnnotations.eks.amazonaws.com/role-arn=
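One thing worth checking here (an assumption, since the stevehipwell/external-dns chart's value names aren't shown in this thread): --set aws-zone-type=private may not map to any value the chart actually recognizes, in which case the flag never reaches the container and both deployments watch all zones. Whatever the correct chart values are, the rendered pod for the private deployment should end up with args along these lines, and each deployment should also get its own txtOwnerId so the two instances don't claim each other's records:

# Expected container args for the private deployment (sketch):
args:
  - --provider=aws
  - --source=ingress
  - --policy=sync
  - --registry=txt
  - --txt-owner-id=private-hostedzone-identifier   # distinct per deployment
  - --interval=30s
  - --aws-zone-type=private   # must be present for zone filtering to apply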