emaincourt closed this 5 days ago
Why do you add a controller parameter? I suppose it's better to use an annotation like `aws-global-accelerator-controller.h3poteto.dev/route53-hostname`. If you use a `route53-zone-id` parameter, this controller can handle only one Route53 record. If we manage multiple accelerators with this controller, we can't use this flag.
I see what you mean. That’s definitely a possibility. It should be fairly easy to rewrite.
The main drawback with that approach is that every ingress the controller manages records for would need to be aware of the zone ID, instead of centralizing that information. External DNS uses ingress annotation filters to match the ingresses it should manage; one deployment would then be created for public zones and another for private zones.
What do you think of being able to filter at the controller level based on the type (private or public) of the zones, same as External DNS? That way you could manage multiple domains with the same controller, while always targeting the right zone.
Ah, you mean a `--route53-zone-ids` parameter for the controller. That's a good idea.
If we don't specify `--route53-zone-ids`, please check all Route53 zones. That means the default value of `--route53-zone-ids` is `*`.
I believe we can actually even omit the ids of the route53 zones. I created a POC branch here that I deployed to a development cluster to test the behavior: https://github.com/omi-lab/aws-global-accelerator-controller/commit/eb9a603261380729bb64106ea9448766aacd7895.
Basically it implements exactly the same logic as External DNS:

- An `annotation-filter` flag, which filters services/ingresses based on a given set of annotations. Assuming some ingresses have the annotation `kubernetes.io/ingress.class=public` and others `kubernetes.io/ingress.class=private`, you could hence tell the controller which ingresses/services it should manage.
- A `route53-zone-type` flag, which filters the zones based on their type (public or private) when syncing a record. Worth noting that the filters do not apply to cleanup, just in case the annotations changed: in that case we want to handle all the zones and only check the ownership of the record.
- A `route53-txt-owner-id` flag, which replaces the current `heritage` TXT record field value, with the default value `aws-global-accelerator-controller` to preserve the current behavior. That way you can have multiple instances of the controller managing the same or different zones. You can also safely delete the records in all zones, be they public or private, without overlapping with other deployments.

To summarize:
- A controller with flags `--annotation-filter=kubernetes.io/ingress.class=private`, `--route53-zone-type=private` and `--route53-txt-owner-id=aws-global-accelerator-controller-private` would manage records for all ingresses/services behind the `private` reverse proxy, for all domains, in private zones only.
- A controller with flags `--annotation-filter=kubernetes.io/ingress.class=public`, `--route53-zone-type=public` and `--route53-txt-owner-id=aws-global-accelerator-controller-public` would manage records for all ingresses/services behind the `public` reverse proxy, for all domains, in public zones only.
If you prefer to go with a static list of zone IDs, I can also implement that. What do you think?
> If you prefer to go with a static list of zone ids

Hmm, I prefer to go with a static list of zone IDs, because the controller can only handle either public or private Route53 zones when we specify the `--route53-zone-type` flag. Users will manage multiple accelerators and Route53 records with one aws-global-accelerator-controller, so it should be able to handle public and private Route53 zones simultaneously.
Got it. What about filtering the ingresses/services?
> What about filtering the ingresses/services ?

Isn't it enough to have a hosted-zone filter? I think it's better to manage it with an annotation rather than controller flags.
Assuming you have records that are both public and private, and others that are exclusively private or public, how would you handle it? With a hosted-zone filter, the controller will parse all ingresses matching the annotations, without any notion of ownership.
Ah, you mean that if I have ingresses with `kubernetes.io/ingress.class=public` and `kubernetes.io/ingress.class=private`, and hostnames like `*.example.net` and `example.net`, this controller can't decide which hosted zone should be used for an ingress. The controller doesn't know which hosted zone a public ingress should use. Correct?
If so, is it easy to add an annotation instead of controller flags? To achieve this requirement, we need two controller flags:

- `--annotation-filter` -> to filter `public` or `private` ingresses
- `--route53-zone-ids` -> allowed list of Route53 hosted zones

However, a single annotation is enough: for example, adding an `aws-global-accelerator-controller.h3poteto.dev/route53-zone-id` annotation to the Ingress or Service. The controller can then decide which hosted zone should be used for the Ingress/Service regardless of whether it is public or private. What do you think?
Adding the annotation is definitely an easy change. As I mentioned earlier in our conversation, the only drawback is that it requires updating all the ingresses of the cluster, and every ingress has to be aware of the zone ID. Filtering based on the ingress class makes it more "agnostic" and allows for automatic discovery of the zones. It's up to you.
OK, I got it. Please proceed by adding annotations.
Hi,
Currently, the controller identifies the Route53 zone it needs to handle by selecting the first one that matches the hostnames. Since we have both a private and a public zone for each DNS name, this approach leads to random and unpredictable behavior.
This pull request introduces a `route53-zone-id` flag, allowing users to specify the exact zone they want to target. This enhancement is essential for us to reliably use the controller in our production environments, and it also helps reduce the required IAM permissions.

This change does not address any existing PR. If the implementation does not meet your requirements or if you believe this use case is not relevant, please let me know.
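To illustrate the first-match ambiguity and how pinning a zone ID resolves it, here is a minimal sketch (illustrative types and helper only; this is neither the AWS SDK nor the controller's actual selection code):

```go
package main

import (
	"fmt"
	"strings"
)

// Zone is a minimal stand-in for a Route53 hosted zone.
type Zone struct {
	ID      string
	Name    string // fully qualified, e.g. "example.net."
	Private bool
}

// pickZone returns the zone to use for a hostname. When wantID is empty,
// the first matching zone wins, so with both a public and a private
// "example.net." zone the outcome depends on listing order. Passing the
// --route53-zone-id value as wantID makes the choice deterministic.
func pickZone(zones []Zone, hostname, wantID string) (Zone, error) {
	for _, z := range zones {
		if !strings.HasSuffix(hostname+".", z.Name) {
			continue // zone does not cover this hostname
		}
		if wantID == "" || z.ID == wantID {
			return z, nil
		}
	}
	return Zone{}, fmt.Errorf("no matching zone for %s", hostname)
}
```

With the flag set, the controller only ever touches the specified zone, which is also why the IAM policy can be scoped down to that zone's ARN.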
Thank you for developing this controller—it has been invaluable in managing GA across our multi-region deployments.