Open TorstenD-SAP opened 9 months ago
This issue or PR has been automatically marked as stale due to the lack of recent activity. Thank you for your contributions.
This bot triages issues and PRs according to the following rules:
- After a period of inactivity, `lifecycle/stale` is applied
- After a further period of inactivity once `lifecycle/stale` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Close this issue or PR with `/close`
If you think that I work incorrectly, kindly raise an issue with the problem.
/lifecycle stale
A label `seed.gardener.cloud/region` was added to each Gardener seed. This label can be used to restrict the seeds allowed for a shoot cluster via the `spec.seedSelector` field in the shoot spec.
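A minimal shoot manifest sketch (hypothetical names and regions) showing how this label can be used; `spec.seedSelector` is a standard Kubernetes label selector:

```yaml
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: my-shoot            # hypothetical shoot name
spec:
  region: eu-west-1         # hypothetical worker region
  seedSelector:
    matchLabels:
      # only seeds labeled with this region are eligible for scheduling
      seed.gardener.cloud/region: eu-west-1
```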
We agreed with @kyma-project/gopher to offer this feature under the following constraints:
I have tested the `seedSelector` field mentioned in https://github.com/kyma-project/kyma/issues/18182#issuecomment-1888624492. @PK85 suggested that for our test scenario we should select a shoot region that does not contain any seeds in it - `ap-northeast-1`. Just by creating a shoot with the default configuration, it got assigned the `aws-ha-us2` seed. Creating another shoot with the `seedSelector` set to the `ap-northeast-1` region resulted in the following status:
*Status*
Create Pending
*Last Message*
```
Failed to schedule Shoot: none out of the ... seeds has the matching labels required by
seed selector of 'Shoot' (selector: 'seed.gardener.cloud/region=ap-northeast-1')
```
The `Create Pending` status seems counterintuitive to @kyma-project/gopher and @kyma-project/framefrog and will be discussed with the Gardener team.
Proposed request sent to the Provisioner's GraphQL API with the new field `shootAndSeedSameRegion`:

```graphql
{
  runtimeInput: {
    ...
  },
  clusterConfig: {
    gardenerConfig: {
      ...
      shootAndSeedSameRegion: false (default) | true,
    },
    ...
  },
}
```
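A minimal Go sketch (hypothetical, simplified types - not the actual Provisioner code) of how the proposed `shootAndSeedSameRegion` flag could be translated into a `seedSelector` pinning the seed to the shoot's region via the `seed.gardener.cloud/region` label:

```go
package main

import "fmt"

// SeedSelector mirrors the label-selector shape of spec.seedSelector
// (simplified for this sketch).
type SeedSelector struct {
	MatchLabels map[string]string
}

// ShootSpec is a stripped-down stand-in for the Gardener shoot spec.
type ShootSpec struct {
	Region       string
	SeedSelector *SeedSelector
}

// applySameRegion is a hypothetical helper: when sameRegion is true,
// it restricts scheduling to seeds in the shoot's own region; when
// false (the default), the Gardener scheduler may pick any seed.
func applySameRegion(spec *ShootSpec, sameRegion bool) {
	if !sameRegion {
		return
	}
	spec.SeedSelector = &SeedSelector{
		MatchLabels: map[string]string{
			"seed.gardener.cloud/region": spec.Region,
		},
	}
}

func main() {
	spec := &ShootSpec{Region: "ap-northeast-1"}
	applySameRegion(spec, true)
	fmt.Println(spec.SeedSelector.MatchLabels["seed.gardener.cloud/region"])
	// prints: ap-northeast-1
}
```

Note that, as the test above showed, pinning a region without seeds in it leaves the shoot stuck in `Create Pending`, which is why the flag defaults to `false`.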
JFYI - added a draft PR for Gardener to extract the seed-determining logic into a separate struct, to make it reusable for other apps over their API:
No longer relevant, see https://github.com/kyma-project/kyma/issues/18182#issuecomment-2145034491.
Two additional test cases were conducted regarding Gardener's `spec.controlPlane.highAvailability.failureTolerance.type: zone` and `seedSelector`. From the Gardener documentation https://gardener.cloud/docs/gardener/high-availability/ we learn that:
> Regarding the seed cluster selection, the only constraint is that shoot clusters with failure tolerance type `zone` are only allowed to run on seed clusters with at least three zones. All other shoot clusters (non-HA or those with failure tolerance type `node`) can run on seed clusters with any number of zones.
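For reference, this is the shoot-spec fragment the cases below toggle (per the Gardener docs linked above):

```yaml
spec:
  controlPlane:
    highAvailability:
      failureTolerance:
        type: zone   # or "node"; "zone" requires a seed with at least three zones
```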
**Case I** - Creating a non-HA shoot in a region that only contains HA seeds (seeds contain "ha" in their names)
- Provider: aws
- Seed Selector: `eu-north-1` - a region with two HA seeds
- HA options: `spec.controlPlane.highAvailability.failureTolerance.type: zone` not set
- Result: the shoot gets created successfully.
**Case II** - Creating an HA shoot in a region that only contains non-HA seeds (no "ha" in the seed's name)
- Provider: gcp
- Seed Selector: `europe-west-3` - a region with one non-HA seed
- HA options: `spec.controlPlane.highAvailability.failureTolerance.type: zone` enabled
- Result:
```
Create Pending - Failed to schedule Shoot: 0/1 seed cluster candidate(s) are eligible for scheduling: {*** => shoot does not tolerate the seed's taints}
```
**Case III** - Creating an HA shoot in a region that contains one HA seed (seed contains "ha" in its name)
- Provider: gcp
- Seed Selector: `me-central2` - a region with one HA seed
- HA options: `spec.controlPlane.highAvailability.failureTolerance.type: zone` enabled
- Result:
```
Create Pending - Failed to schedule Shoot: 0/1 seed cluster candidate(s) are eligible for scheduling: {*** => shoot does not tolerate the seed's taints}
```
Rendering of schema changes:
Tests for the seed selection process when provisioning shoots in high-availability configuration (documented in https://github.com/kyma-project/kyma/issues/18182#issuecomment-2133446254) assumed that seeds containing "ha" in their name (e.g. `aws-ha-eu3`) are specially designed to serve HA configurations. This is incorrect. Seeds with such names are just a result of old naming conventions. All seeds with at least three zones are able to handle HA control-plane deployments. Additionally, there is also a `visible` property that restricts the number of seeds available for scheduling. At the time of writing this comment, all seeds were deployed across three zones.
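The `visible` property mentioned above lives in the Gardener `Seed` resource; a seed with it set to `false` is excluded from automatic scheduling (fragment below is a sketch of the relevant seed-spec path):

```yaml
spec:
  settings:
    scheduling:
      visible: false   # this seed is skipped by the scheduler
```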
As of today, we have implemented the KEB part for the Provisioner. @kyma-project/gopher are waiting for the KIM implementation.
Appendix - some more background information related to this issue:
- Customer-reported bug
- Slack thread on #kyma-team
- Slack thread on #sap-tech-gardener-live
Description
The user who creates a Kyma cluster in the BTP cockpit should be able to enforce that the Control Plane is located in the same region as the Hyperscaler account where the cluster's Worker Nodes are deployed. If it is not possible to have the Control Plane in the same region, the user should see an error message allowing them to proceed without this enforcement. In all cases it has to be transparent to the customer in which region the Control Plane is hosted.
Reasons
The region of the Control Plane is automatically chosen by Gardener (https://gardener.cloud/docs/gardener/concepts/scheduler/). Because of this, the Control Plane can sometimes be deployed in a different region than the Worker Nodes, among other reasons because Gardener doesn't have Seed clusters in all the regions Kyma can be deployed to. This can lead to a violation of the law, because the Control Plane could be in a different legal area than the Worker Nodes while the customer is storing personal data (e.g. names, email addresses) on the Control Plane. We also have customers who are very sensitive regarding the regions where sensitive data is stored.
AC (Added by PK)