What
We would like to be able to offer the multi-cluster load balancing options for DNSPolicy to users without them needing to use Open Cluster Management.
Why
We want to retain our OCM integration, but also reduce the barrier to entry for smaller deployments and local setups. OCM provides a great set of APIs, but it adds a lot of overhead (not least a separate hub cluster), which could be a reason users choose some other solution. In addition, a migration path from single to multi-cluster where the cluster can essentially be mirrored is a compelling story: it removes the need for teams and users to learn a whole new set of APIs.
Additional Considerations
How should health checks work?
PoCs/Options to explore
1) Leverage k8gb.io. This is in a similar vein to how we want DNSPolicy to work for single-cluster use cases, where we allow a provider such as external-dns to be specified and our policy controller then creates the required DNSRecord based on the policy definition design doc. We could look at whether something similar is achievable with k8gb.io: if a user has k8gb.io installed, they define a DNSPolicy and our controller, instead of interacting directly with AWS etc., creates the correct resources for k8gb.
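As a rough sketch of what the controller might generate in this mode: k8gb exposes a Gslb custom resource (apiVersion k8gb.absa.oss/v1beta1) that wraps an ingress definition with a load-balancing strategy. The hostname, service name, and strategy choice below are illustrative assumptions derived from a DNSPolicy, not part of any agreed design:

```yaml
# Hypothetical Gslb resource our controller could create from a DNSPolicy
# targeting shop.example.com. Service name and strategy are assumptions.
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  name: shop-gslb
spec:
  ingress:
    rules:
      - host: shop.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: shop
                  port:
                    number: 80
  strategy:
    type: roundRobin   # k8gb also supports failover and geoip strategies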
2) Coordinate with the zone
In a similar vein to external-dns, we could use the zone itself as a registry, coordinating between multiple instances of the policy controller by leveraging TXT records.
One concept (lots of caveats)
Each instance would be started with a record prefix; this prefix would be common across all clusters that want to load balance together.
Each instance would also be started with a 'clusterid'.
Each instance would repeatedly attempt to become the leader by creating a TXT record containing its clusterid and a timestamp.
The current leader would reconcile any shared records and defaults, plus its own A/CNAME records.
Every other instance would reconcile only its own A, CNAME, and geo records.
shop.example.com          CNAME                        lb-.shop.example.com      (leader)
lb-.shop.example.com      CNAME geolocation ireland    ie.lb-.shop.example.com   (IE-based instances)
lb-.shop.example.com      CNAME geolocation australia  aus.lb-.shop.example.com  (AUS-based instances)
lb-.shop.example.com      CNAME geolocation default    ie.lb-.shop.example.com   (set by the default geo option) (leader)
ie.lb-.shop.example.com   CNAME weighted 100           .lb-.shop.example.com     (IE cluster instance with IP)
ie.lb-.shop.example.com   CNAME weighted 100           aws.lb.com                (specific IE instance where the host was aws.lb.com)
aus.lb-.shop.example.com  CNAME weighted 100           ab2.lb-.shop.example.com  (AUS instance)
aus.lb-.shop.example.com  CNAME weighted 100           ab3.lb-.shop.example.com  (AUS instance)
-.shop.example.com        A 192.22.2.1 192.22.2.5
ab2.lb-.shop.example.com  A 192.22.2.3
ab3.lb-.shop.example.com  A 192.22.2.4