Open nab-gha opened 3 years ago
When creating an EKS cluster, if you specify that the API server should only be accessible via the private network, then the CAPI/CAPA controllers running on the management cluster will not be able to reach the API server of the tenant cluster being created. The CAPI controller on the management cluster requires access to the tenant cluster's API in order to deploy ClusterResourceSets, and the CAPA controller requires access to complete the cluster deployment.
It is therefore necessary to ensure the management cluster has access to the tenant cluster's private API endpoint. If the management cluster is running outside the AWS environment, it will be necessary to provide access to the AWS VPC private network using VPN access. This scenario is out of scope for this ticket; see #2504.
If the management cluster is running in AWS, the recommended approach is to establish VPC peering between the management cluster VPC and the tenant cluster VPC. This can be achieved by following the instructions at https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
Request a peering connection between the management cluster VPC and the tenant cluster VPC, then accept the peering request using the account the tenant cluster is running under.
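The request-and-accept flow above can also be scripted with the AWS CLI. This is a sketch only; the VPC IDs, account ID, and peering connection ID are placeholders to substitute with your own values:

```shell
# Request a peering connection from the management cluster VPC (requester)
# to the tenant cluster VPC (accepter). All IDs below are placeholders.
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0mgmt0000000000000 \
  --peer-vpc-id vpc-0tenant00000000000 \
  --peer-owner-id 111122223333

# Accept the request using credentials for the tenant cluster's account,
# passing the pcx- ID returned by the previous command.
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0example0000000000
```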
Note that the private address blocks for the management and tenant cluster VPCs must not overlap. The tenant cluster's CIDR block can be set in the AWSManagedControlPlane specification:

```yaml
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
metadata:
  name: "tenant01-control-plane"
spec:
  networkSpec:
    vpc:
      cidrBlock: "10.201.0.0/16"
  endpointAccess:
    public: false
    private: true
```
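A quick way to sanity-check that the two VPCs' address blocks do not overlap before requesting the peering connection, using Python's standard ipaddress module (the CIDR values below are illustrative, not taken from any real cluster):

```python
import ipaddress

def cidrs_overlap(a: str, b: str) -> bool:
    """Return True if the two CIDR blocks share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# Illustrative management vs. tenant cluster VPC CIDRs
print(cidrs_overlap("10.200.0.0/16", "10.201.0.0/16"))  # distinct /16 blocks -> False
print(cidrs_overlap("10.0.0.0/8", "10.201.0.0/16"))     # /8 contains the /16 -> True
```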
Routes must be established from the management cluster VPC to the tenant cluster VPC, and from the tenant cluster VPC to the management cluster VPC.
For each private subnet in the management cluster VPC, add a rule to the subnet's existing route table specifying the CIDR of the tenant VPC with the peering connection as the target.
For each private subnet in the tenant cluster VPC, add a rule to the subnet's existing route table specifying the CIDR of the management VPC with the peering connection as the target.
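The route additions can be sketched with the AWS CLI as follows; route table IDs, CIDRs, and the peering connection ID are placeholders, and the commands would be repeated for each private route table in each VPC:

```shell
# Management cluster VPC: route traffic destined for the tenant VPC CIDR
# through the peering connection.
aws ec2 create-route \
  --route-table-id rtb-0mgmt0000000000000 \
  --destination-cidr-block 10.201.0.0/16 \
  --vpc-peering-connection-id pcx-0example0000000000

# Tenant cluster VPC: route traffic destined for the management VPC CIDR
# through the same peering connection.
aws ec2 create-route \
  --route-table-id rtb-0tenant00000000000 \
  --destination-cidr-block 10.200.0.0/16 \
  --vpc-peering-connection-id pcx-0example0000000000
```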
An additional ingress rule will need to be added to the tenant cluster control plane security group to allow access from the management cluster.
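As a sketch of that ingress rule via the AWS CLI, assuming the API server listens on TCP 443 and using a placeholder security group ID and management VPC CIDR:

```shell
# Allow the management cluster VPC's CIDR to reach the tenant cluster's
# control plane security group on the API server port (443). The group ID
# and CIDR below are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0tenantcontrolplane0 \
  --protocol tcp \
  --port 443 \
  --cidr 10.200.0.0/16
```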
This needs to be added to the docs when I or someone else has time.
/help /milestone v0.7.x
@sedefsavas: This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
I can add it to the docs if required.
/assign @sayantani11
@paulcarlton-ww Can you specify which docs require the addition?
@sayantani11 I think it needs a new section which describes how to configure a cluster to run without exposing the API server publicly. This is dependent on landing #2514, which I may get time to pick up again soon.
@paulcarlton-ww Yeah, I was thinking the same. Did the change occur after shifting to v1alpha4?
PR #2514 was started prior to v1alpha4 but will need to be rebased for v1alpha4.
/priority backlog /triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
/kind feature
Describe the solution you'd like
See #2465: the management cluster requires access to the tenant cluster's API, and if that is configured for private access only, the management cluster's VPC needs to be peered with the tenant cluster's VPC.
Establishing the required VPC peering needs to be performed by the user. We should document this process and the reasons why it is required.