In TLS communication, a client accessing a service via ClusterLink gateways requires that the locally resolved name appear in the certificate presented by the server.
For Kubernetes services, this means the source and destination DNS names must be kept identical (i.e., the service name and namespace are maintained across clusters). This may require cluster-wide privileges (e.g., to create a service in a namespace other than the one the gateway runs in).
This is even more apparent when importing a service that is external to the cluster and uses TLS: the client needs to resolve the name as it appears in the certificate (e.g., api.example.com) to the local gateway. Note that a secondary local domain (e.g., "multicluster.local" or "foo.io") would not work, as it would likewise not be present in the cloud service's certificate.
Kubernetes allows this via a few mechanisms:
- Set the DNS configuration of the client Pod, manually or via an admission controller, to use a special DNS server that can perform the overrides.
- Change the CoreDNS configuration to rewrite/resolve the external names in a special way.
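The first mechanism can be sketched as a Pod that opts out of the cluster DNS default and points at an override-capable resolver (the nameserver IP and search domains below are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  dnsPolicy: "None"          # bypass the cluster DNS default
  dnsConfig:
    nameservers:
      - 10.96.0.53           # IP of the override-capable DNS server (illustrative)
    searches:
      - default.svc.cluster.local
      - svc.cluster.local
    options:
      - name: ndots
        value: "5"
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
```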
For reference on changing Pod DNS, see here. For examples of manipulating CoreDNS, see here and here.
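For the CoreDNS route, a minimal sketch using the `rewrite` plugin could look as follows (the gateway Service name and namespace are assumptions, not the actual ClusterLink deployment layout):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        # Resolve the external name to the local gateway Service instead
        rewrite name api.example.com clusterlink-gw.clusterlink-system.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }
```

With this in place, a client looking up api.example.com receives the address of the local gateway, while the name it connects with still matches the certificate presented end-to-end.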
In addition, we would need to extend the service import (or binding) with an optional DNS alias that captures the expected client DNS entry.
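A hypothetical shape for such an extension (the `dnsAlias` field is a proposal only, not part of any existing API; the other field names are likewise illustrative):

```yaml
apiVersion: clusterlink.net/v1alpha1
kind: Import
metadata:
  name: payments
  namespace: default
spec:
  port: 443
  sources:
    - exportName: payments
      exportNamespace: default
      peer: peer2
  dnsAlias: api.example.com   # proposed: the DNS name clients expect to resolve
```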
The hope is that the above can also be used to resolve issues with protocols that seed the client with bootstrap servers, which are in turn used to retrieve the full list of servers (Kafka does this, for example). In that case, we would need to alias the full list of servers that can be discovered by the client.
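For the Kafka case, this would amount to one rewrite per discoverable broker name; a sketch, assuming illustrative broker names and gateway address:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        # Alias every broker name the client may learn from bootstrap metadata
        rewrite name broker-0.kafka.example.com clusterlink-gw.clusterlink-system.svc.cluster.local
        rewrite name broker-1.kafka.example.com clusterlink-gw.clusterlink-system.svc.cluster.local
        rewrite name broker-2.kafka.example.com clusterlink-gw.clusterlink-system.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
    }
```

The difficulty is that the broker list is returned dynamically by the protocol, so the set of names to alias must be known (or discoverable) ahead of time.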