Open · huxcrux opened 7 months ago
This is a CAPI limitation (which may in turn be based on a kubeadm limitation?): it is only possible to configure a single control plane endpoint, so it is the public one. I completely agree that it would be ideal to have separate internal and external endpoints, but I don't think there's currently anywhere to configure them.
It's tracked here: https://github.com/kubernetes-sigs/cluster-api/issues/5295
Reading through the comments on that issue and also https://github.com/kubernetes-sigs/cluster-api/pull/8500, it sounds like some other providers have various degrees of workaround/hack for this issue, which might be worth investigating until we can implement it properly.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/kind bug
What steps did you take and what happened:
I tried to implement `allowedCIDRs` for my clusters by setting `allowedCIDRs` under `apiServerLoadBalancer` in the OpenStackCluster spec. Example:
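The original example manifest did not survive in this report; below is a minimal sketch of what such a configuration looks like, assuming the `OpenStackCluster` layout with `apiServerLoadBalancer.allowedCIDRs` (the apiVersion, cluster name, and CIDR values are placeholders):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1  # placeholder API version
kind: OpenStackCluster
metadata:
  name: my-cluster        # placeholder name
spec:
  apiServerLoadBalancer:
    enabled: true
    allowedCIDRs:
      - 192.0.2.0/24      # placeholder: operator network
      - 198.51.100.0/24   # placeholder: CI network
```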
The problem I face is that we do not use the standard Neutron implementation, where all nodes use the router's IP for SNAT. As a result, when bootstrapping the first node the kubelet fails to start, because its connections to the API server are blocked by the load balancer.
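The failure mode can be illustrated with the stdlib `ipaddress` module: node traffic egresses from a shared SNAT address that is not covered by any entry in `allowedCIDRs`, so the load balancer drops it (all addresses below are hypothetical):

```python
import ipaddress

# Hypothetical allowedCIDRs configured on the API server load balancer.
allowed_cidrs = [ipaddress.ip_network("192.0.2.0/24")]

# Hypothetical shared SNAT address that node egress traffic is translated to.
snat_ip = ipaddress.ip_address("203.0.113.7")

# The SNAT source falls in none of the allowed networks, so the LB rejects it.
blocked = not any(snat_ip in net for net in allowed_cidrs)
print(blocked)  # → True: the kubelet's API connections are dropped
```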
I could manually add all of our SNAT pool IPs and everything works. The problem with this is that the SNAT IPs are shared between multiple customers, and even if I used the new IPAM code that is under review I would still need to manually add those IPs to the `allowedCIDRs` list.

What did you expect to happen:
I expect all cluster nodes to use the internal LB endpoint for API traffic. It seems odd to use the external IP for in-cluster traffic when an internal endpoint exists.
Anything else you would like to add:
Another alternative is to use the IPAM IPPool. The problem with this is that we would need to watch another object and trigger an LB reconcile whenever it changes. On the other hand, that object contains a list of valid IPs that could simply be appended. I still think it would make more sense to use the internal endpoint for in-cluster API traffic.
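The IPPool alternative described above amounts to merging each allocated pool address into the configured list as a host entry. A minimal sketch of that merge step (the function name is illustrative, not a CAPO API):

```python
import ipaddress

def merged_allowed_cidrs(allowed_cidrs, ippool_addresses):
    """Append IPAM IPPool addresses to allowedCIDRs as /32 host entries, deduplicated."""
    merged = {ipaddress.ip_network(c) for c in allowed_cidrs}
    merged.update(ipaddress.ip_network(f"{a}/32") for a in ippool_addresses)
    return sorted(str(n) for n in merged)

# Hypothetical configured CIDRs plus addresses read from an IPPool status.
print(merged_allowed_cidrs(["192.0.2.0/24"], ["203.0.113.7", "203.0.113.8"]))
# → ['192.0.2.0/24', '203.0.113.7/32', '203.0.113.8/32']
```

The downside, as noted above, is that the controller would have to watch the IPPool object and re-reconcile the LB on every change.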
Environment:
- Provider version (`git rev-parse HEAD` if manually built): latest master (commit: 5cc483bfc6eae8a8b8a67b32e9b7af0bafa473ca)
- Kubernetes version (`kubectl version`): 1.29.1
- OS (`/etc/os-release`): Ubuntu 22.04