randomvariable opened this issue 2 years ago
kubeadm picked the port 6443 as the default for secure serving because that is the default in the kube-apiserver:
--secure-port int Default: 6443
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/
this SO post shares some insight into why 6443 might have been chosen: https://stackoverflow.com/a/44051315
...not to cause conflicts with the default Application Server domain
but as long as 443 is not occupied, using it for HTTPS seems fine.
if Cluster API wishes to switch to that, I would give my tentative -1 (please feel free to ignore), because it would drift from the k8s/kubeadm default. kubeadm, which tries to follow the k8s defaults tightly, will likely remain on 6443 until the API server switches.
As a cluster operator, I may be working in a multi-cloud environment, and also have a directive to always have https endpoints exposed on 443. In the Kubernetes context, this means clients should only ever connect to the API server on port 443.
a minor concern here is that a specific deployment requirement (i.e. "we must use 443") should not dictate the port for core CAPI and all providers; instead, perhaps that particular deployment should be configured accordingly.
Not suggesting we switch by default. Sorry, I was talking as a fictional cluster operator in a company. I'll clarify.
In fact, I think organisations who want 443 may be shooting themselves in the foot at a later date for exactly the reasons listed in the stack overflow comment.
In Azure, CAPZ assumes that if cluster.spec.clusterNetwork.APIServerPort isn't 6443, that kube-apiserver is also listening on the different port. This is kind of necessary due to the way the Azure LBs work and the workarounds required for hairpin NAT.
I don't think this needs to behave this way due to any particular Azure infra constraint. We should be able to solve for this. I'll create an issue to expose the API server LB port while keeping the API server backend port 6443, as done in CAPA.
/area control-plane
I don't think this needs to behave this way due to any particular Azure infra constraint.
Isn't there a case where, because hairpin NAT doesn't exist, there's a hack to /etc/hosts that repoints the LB control plane endpoint to 127.0.0.1?
/milestone v1.2
/kind api-change
Ideally, users need to be able to configure the local apiServer port through a single field. Today we can't really make the various bindPort fields in the init and join configurations part of the bootstrappers contract, because they're nested into APIs and structs that are kubeadm specific.
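For reference, this is how nested that setting is today; a sketch of a KubeadmControlPlane manifest (resource name is hypothetical, other required fields omitted; field paths follow the v1beta1 APIs discussed in this thread):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane   # hypothetical name
spec:
  replicas: 3
  kubeadmConfigSpec:
    # kubeadm-specific types: the port is buried in InitConfiguration...
    initConfiguration:
      localAPIEndpoint:
        bindPort: 6443
    # ...and must be kept in sync in JoinConfiguration for joining
    # control plane machines
    joinConfiguration:
      controlPlane:
        localAPIEndpoint:
          bindPort: 6443
```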
we probably also need to disambiguate between cluster.spec.clusterNetwork.APIServerPort and cluster.spec.ControlPlaneEndpoint.Port. An idea could be:
- move cluster.spec.clusterNetwork.APIServerPort to cluster.spec.ControlPlaneEndpoint.Port and ensure providers read from it to figure out the LB port
- rename cluster.spec.clusterNetwork.APIServerPort to cluster.spec.clusterNetwork.LocalAPIServerPort
@yastij If I got this right, we are addressing two problems. If I look at the current API:
- cluster.spec.ControlPlaneEndpoint.Port: we should probably make it clear in the doc that if the user sets the Port value, the infrastructure provider should comply with this value (if supported); a similar case could be made for cluster.spec.ControlPlaneEndpoint.Host.
- cluster.spec.clusterNetwork.APIServerPort: we could argue whether the name is the right one, but I think the main problems here are: [...]
In other words, would it work to keep things as they are, improve the docs, improve CABPK to make the UX simpler, and make providers behave consistently? The main problem I see is how to avoid a negative impact on existing Clusters; I need to think a little bit more about this...
/reopen
pressed the wrong button, sorry
@fabriziopandini: Reopened this issue.
After thinking about avoiding negative impact, I think that a possible way out is to:
- add cluster.spec.clusterNetwork.LocalAPIServerPort or cluster.spec.clusterNetwork.APIServerBindPort to v1beta1 Cluster, setting up a clear semantic about it: it maps to initConfiguration/JoinConfiguration.localAPIEndpoint.bindPort
- deprecate cluster.spec.clusterNetwork.APIServerPort (TBD exact behavior for conversion after removal; it might be converted to cluster.spec.ControlPlaneEndpoint.Port if empty)
- providers on their side must:
  - start using cluster.spec.clusterNetwork.LocalAPIServerPort for load balancer backends
  - start using cluster.spec.ControlPlaneEndpoint.Port for load balancer frontends (while continuing to use cluster.spec.clusterNetwork.APIServerPort as an override to avoid breaking changes, until the field is removed)

> providers on their side must [...] start using cluster.spec.ControlPlaneEndpoint.Port for load balancers FrontEnds

what about the infrastructure provider contract?

> the spec object must have the following fields defined: controlPlaneEndpoint - identifies the endpoint used to connect to the target's cluster apiserver.
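To make the proposed semantics concrete, here is a hypothetical Cluster manifest under that scheme (localAPIServerPort is a proposed field, not part of the current API; names and the endpoint are made up):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster                 # hypothetical name
spec:
  clusterNetwork:
    # proposed field: the port kube-apiserver binds on each control plane
    # node; providers would use it for load balancer *backends*
    localAPIServerPort: 6443
  controlPlaneEndpoint:
    host: my-cluster.example.com   # hypothetical endpoint
    # providers would use this for load balancer *frontends*; clients
    # connect here (e.g. 443, per the operator story above)
    port: 443
```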
@yastij ^^ That's a good point. Taking a step back, we have a couple of user stories, in some cases not mutually exclusive:
In general it sounds good to me to have fields with a clear semantic that cover our relevant use cases on the Cluster object and then infer the corresponding values in CABPK (if they are set, otherwise use defaults).
/assign
/lifecycle active
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
/triage accepted
(doing some cleanup on old issues without updates)
/unassign @yastij
/help
@fabriziopandini: This request has been marked as needing help from a contributor.
Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help
command.
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
/priority important-longterm
The current status, with different interpretations of those fields by different providers, is not ideal.
We need a volunteer to do some research across providers and come up with some ideas (eventually a proposal / an improvement to the contract) to improve the situation. The Flexible Managed Kubernetes Endpoints proposal should also be taken into account.
However, since there is almost no activity on this issue, we are decreasing its priority.
/priority backlog
/triage accepted
What steps did you take and what happened: [A clear and concise description on how to REPRODUCE the bug.]
A common refrain I have heard from customers:
"As a cluster operator, I may be working in a multi-cloud environment, and also have a directive to always have https endpoints exposed on 443. In the Kubernetes context, this means clients should only ever connect to the API server on port 443."
In Cluster API, we have a number of places where the port can be changed:
- cluster.spec.clusterNetwork.APIServerPort: this should be used by infrastructure providers to configure the load balancer port.
- kcp.spec.kubeadmConfigSpec.initConfiguration.localAPIEndpoint.bindPort: this will change the port that kube-apiserver listens on on each individual control plane node, and is used by kubeadm.

In AWS, CAPA will use cluster.spec.clusterNetwork.APIServerPort to set the LB port and assume kube-apiserver continues to listen on 6443.
In Azure, CAPZ assumes that if cluster.spec.clusterNetwork.APIServerPort isn't 6443, kube-apiserver is also listening on the different port. This is kind of necessary due to the way the Azure LBs work and the workarounds required for hairpin NAT.
Importantly, in bare-metal / on-prem environments like vSphere/Tinkerbell, we might be using kube-vip to make our control plane highly available. In this circumstance, there is no LB, and cluster.spec.clusterNetwork.APIServerPort is ignored.

What did you expect to happen:
One consistent place.
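As a concrete illustration of the two knobs listed above as they exist today (resource names are hypothetical; the field paths are the ones in the current v1beta1 APIs; whether the provider also moves the backend port when apiServerPort changes differs between CAPA and CAPZ, which is exactly the inconsistency described here):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster                 # hypothetical name
spec:
  clusterNetwork:
    apiServerPort: 443             # intended for the load balancer port
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: my-cluster-control-plane
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane   # other required fields omitted
spec:
  kubeadmConfigSpec:
    initConfiguration:
      localAPIEndpoint:
        bindPort: 6443             # port kube-apiserver binds on each node
```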
Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]
Enforcing any particular implied behaviour may break one or more providers as well as clusters on upgrade, so should not be done within a v1beta1 release IMO.
Also, not suggesting we change the default port, but make it easier to change the port and have a defined behaviour.
Environment:
- Kubernetes version: (use kubectl version)
- OS (e.g. from /etc/os-release)

/kind bug
[One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels]