kubernetes / cloud-provider-openstack


[occm] multizone race condition #2590

Open sergelogvinov opened 1 month ago

sergelogvinov commented 1 month ago

We cannot definitively determine whether a node exists within our zone without the provider ID string.

What this PR does / why we need it:

It matters when OS_CCM_REGIONAL is true. A node trying to join the cluster has NotReady status and the uninitialized taint. The cloud-node controller attempts to initialize the node at the same time as the node-lifecycle controller attempts to delete it, because the node is located in another region.

So we need to skip nodes without a ProviderID in OS_CCM_REGIONAL mode. Azure does the same: `az.IsNodeUnmanaged` -> [IsNodeUnmanagedByProviderID](https://github.com/kubernetes-sigs/cloud-provider-azure/blob/master/pkg/provider/azure_wrap.go#L86) treats a node as unmanaged if its ProviderID is not an Azure string (or is empty).
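Roughly, the intended check looks like this (a minimal sketch of the idea, not the actual diff; `regionalEnabled` and `lookupInRegion` are illustrative names):

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
)

// instanceExists: in OS_CCM_REGIONAL mode a node without a ProviderID cannot be
// attributed to a region, so this CCM reports it as existing and leaves it alone.
func instanceExists(node *v1.Node, regionalEnabled bool) (bool, error) {
	if regionalEnabled && node.Spec.ProviderID == "" {
		// Prevents the node-lifecycle controller from deleting a node that
		// another region's CCM may still be initializing.
		return true, nil
	}
	// Normal path: resolve the instance from the ProviderID (or node name).
	return lookupInRegion(node)
}

// lookupInRegion is a stub standing in for the real OpenStack lookup.
func lookupInRegion(node *v1.Node) (bool, error) { return true, nil }
```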

Which issue this PR fixes (if applicable):

part of #1924

Special notes for reviewers:

Release note:

OS_CCM_REGIONAL is an alpha feature, so we do not need to update the release note. The feature flag was introduced in #1909.

NONE
linux-foundation-easycla[bot] commented 1 month ago

CLA Signed

The committers listed above are authorized under a signed CLA.

k8s-ci-robot commented 1 month ago

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

Once this PR has been reviewed and has the lgtm label, please assign anguslees for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

- **[pkg/openstack/OWNERS](https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/openstack/OWNERS)**

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
k8s-ci-robot commented 1 month ago

Hi @sergelogvinov. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
mdbooth commented 1 month ago

I feel like the underlying problem here is that getInstance doesn't support multiple regions when providerID is not set: https://github.com/sergelogvinov/cloud-provider-openstack/blob/3c97ceb61c74d4a58716b1d3109169d05451ea67/pkg/openstack/instancesv2.go#L153-L175

Would it be safer and less prone to hard-to-fix future edge cases to address that instead?
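For illustration, a multi-region-aware lookup for nodes without a providerID might look roughly like this (a sketch only; the type and helper names are made up, not the existing code in instancesv2.go):

```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	cloudprovider "k8s.io/cloud-provider"
)

// instance is a placeholder for whatever the real code returns from the OpenStack API.
type instance struct{ id, region string }

// getInstanceAcrossRegions: if the node has a ProviderID, use it; otherwise
// search every configured region by node name before concluding anything.
func getInstanceAcrossRegions(ctx context.Context, node *v1.Node, regions []string) (*instance, error) {
	if node.Spec.ProviderID != "" {
		return findByProviderID(ctx, node.Spec.ProviderID)
	}
	for _, region := range regions {
		inst, err := findByName(ctx, region, node.Name)
		if err != nil {
			return nil, err
		}
		if inst != nil {
			return inst, nil // the node belongs to this region
		}
	}
	// Not found in any region: report that honestly instead of asserting existence.
	return nil, cloudprovider.InstanceNotFound
}

// Stubs standing in for real OpenStack API calls.
func findByProviderID(ctx context.Context, providerID string) (*instance, error) { return nil, nil }
func findByName(ctx context.Context, region, name string) (*instance, error)     { return nil, nil }
```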

sergelogvinov commented 1 month ago

My changes are based on these thoughts:

If we decide to add a new region into the existing cluster, we'll need to redeploy the CCM with the OS_CCM_REGIONAL flag. Subsequently, all new nodes will be assigned a region name in their ProviderID string.

The CCM will use the default region (regionA) for nodes lacking a region in their ProviderID. Therefore, two instances of CCM, each configured with a different region name, will operate independently. CCM-regionA will locate instances by ProviderID regardless of region presence, while CCM-regionB will only recognize nodes with a region name in their ProviderID (as all nodes in regionB will have one).
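As a sketch of that sharding rule (assuming the regional provider-ID format `openstack://<region>/<instance-id>` and the legacy `openstack:///<instance-id>` with an empty region segment; the helper name is made up):

```go
package sketch

import "strings"

// nodeBelongsToThisCCM decides whether this CCM instance should manage a node,
// based only on its ProviderID and the region this CCM is configured for.
func nodeBelongsToThisCCM(providerID, myRegion string, isDefaultRegionCCM bool) bool {
	rest, ok := strings.CutPrefix(providerID, "openstack://")
	if !ok {
		// Empty or non-OpenStack ProviderID: only the default-region CCM (regionA) handles it.
		return isDefaultRegionCCM
	}
	region, _, _ := strings.Cut(rest, "/")
	if region == "" {
		// Legacy ProviderID without a region segment, e.g. openstack:///<instance-id>.
		return isDefaultRegionCCM
	}
	return region == myRegion
}
```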

mdbooth commented 1 month ago

Got it. However, this would still be baking logic into the CCM based on a particular deployment strategy. I think it would still address your use case if we, e.g., updated getInstance to be multi-region aware in the event that providerID is not set. This would also mean that we would not be asserting that an instance exists when we don't know that it does. I can't help but feel that the fact that this happens not to break anything currently is not something we should rely on indefinitely.

mdbooth commented 1 month ago

IIUC, the situation you're describing is one where we have multiple CCMs: one per region. The motivation for doing this is...

OpenStack CCM is not multi-region aware? Is there any other reason?

My initial thought was that it would be required for HA, but as long as your control plane is multi-region, a failure of a complete region where the CCM is running would result in the CCM running in another region.

It's not to reduce east-west traffic, either, because the CCM is only talking to KAS, and etcd data is presumably replicated across your control plane anyway.

That isn't quite where I thought I was going with this comment, btw. I just thought I'd state the motivation for wanting to do this first, only to realise I didn't understand it.

Regardless, I think this is fundamentally a design question of supporting multiple CCMs in a single cluster. I wonder if the node folks have any existing guidance on this. I vaguely recall having discussed this before. My impressions from that discussion, which would have been around the time of the external cloud-provider split, were:

Still, I wonder if it's worth reaching out to those folks. Potential outcomes:

Not sure what the forum to raise this is, tbh. I wonder if @aojea, @danwinship, or @andrewsykim could signpost us appropriately?

sergelogvinov commented 1 month ago

I agree with you. Multi-region or hybrid cloud setups introduce complexity. Determining the affiliation of uninitialized nodes becomes challenging.

I've conducted extensive research and experimented with various setups, including multi-region or hybrid Kubernetes clusters. For instance https://github.com/sergelogvinov/terraform-talos?tab=readme-ov-file#multi-cloud-compatibility

Currently, I've been working on a branch of OCCM that supports multi-region OpenStack setups, particularly for the node and node-lifecycle controllers. However, configuring routes and services in such setups can be quite challenging. I believe that the Cinder CSI plugin can also be adapted for multi-region deployments. I had initially planned to introduce it after this PR to simplify the review process.

PS. I've developed my own multi-region CCM/CSI implementation for Proxmox: the Proxmox CSI Plugin and the Proxmox Cloud Controller Manager. After about a year of production experience with this implementation, I'm confident in its viability.

aojea commented 1 month ago

Definitely a topic to discuss in sig-cloud-provider. We recently had a long discussion on the provider ID topic, which seems to not be well specified (https://github.com/kubernetes/kubernetes/pull/123713), so I will also validate all the assumptions so we don't make any assumption now that is not correct.

sergelogvinov commented 1 month ago

> Definitely a topic to discuss in sig-cloud-provider. We recently had a long discussion on the provider ID topic, which seems to not be well specified (kubernetes/kubernetes#123713), so I will also validate all the assumptions so we don't make any assumption now that is not correct.

These changes affect only the node-lifecycle controller, but the discussion in kubernetes/kubernetes#123713 was about the node controller.

PS. With this patch a node without a ProviderID can never be deleted...
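To spell out why (illustrative only; the method shape follows the `cloudprovider.InstancesV2` interface, the receiver and lookup are made up): the node-lifecycle controller deletes a Node only after `InstanceExists` reports false, so always answering true for an empty ProviderID means such a Node is never garbage-collected, even if the backing server was actually removed.

```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
)

type instancesSketch struct{}

// InstanceExists mirrors the cloudprovider.InstancesV2 method shape.
func (i *instancesSketch) InstanceExists(ctx context.Context, node *v1.Node) (bool, error) {
	if node.Spec.ProviderID == "" {
		// Never reported as gone, hence never deleted by the node-lifecycle controller.
		return true, nil
	}
	return i.existsByProviderID(ctx, node.Spec.ProviderID)
}

// existsByProviderID is a stub; real code would query the OpenStack API.
func (i *instancesSketch) existsByProviderID(ctx context.Context, providerID string) (bool, error) {
	return true, nil
}
```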

mdbooth commented 1 month ago

The architecture here is:

Is that right?

Do the per-region CCMs actually shard the Nodes, or do we just ensure the responses from InstancesV2 are such that the controller will never take any action for a Node which is not in its region?

mdbooth commented 1 month ago

Have you considered having kubelet set the providerID on Node creation, btw? Here's how CAPO does this:

https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/1e0b60fd0ffd7fc1e940ada7e85ec8d724381146/templates/cluster-template.yaml#L21-L24

This is used by kubeadm when it creates the service which runs kubelet. It gets the instance-id from the local metadata service.

Here's how we do it in OpenShift... for AWS unfortunately, but the principle is the same (I'm convinced we did this for OpenStack too... may have to fix that 🤔):

https://github.com/openshift/machine-config-operator/blob/master/templates/common/aws/files/usr-local-bin-aws-kubelet-providerid.yaml

That unit runs on boot and sets a KUBELET_PROVIDERID environment variable which the kubelet unit uses on startup. Again it uses the local (AWS) metadata service to get the instanceID, and AZ in that case.

Unfortunately I don't think the OpenStack region is available via the metadata service. However, a workaround for this might be to have a separate template for each region, where the region is hard-coded and only the instanceID is filled in from the metadata service.
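For example, a per-region boot-time helper in the spirit of the OpenShift unit above could read the instance UUID from the local metadata service and combine it with a hard-coded region (a sketch only; the regional provider-ID format `openstack://<region>/<uuid>` is an assumption here, and the output would be wired into the kubelet unit however your distro does that):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// hardCodedRegion would differ in each region-specific template.
const hardCodedRegion = "regionB"

func main() {
	// The OpenStack metadata service exposes the instance UUID, but not the region.
	resp, err := http.Get("http://169.254.169.254/openstack/latest/meta_data.json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var md struct {
		UUID string `json:"uuid"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&md); err != nil {
		panic(err)
	}

	// Written to an env file consumed by the kubelet unit,
	// e.g. KUBELET_PROVIDERID=openstack://regionB/<uuid>.
	fmt.Printf("KUBELET_PROVIDERID=openstack://%s/%s\n", hardCodedRegion, md.UUID)
}
```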

mdbooth commented 1 month ago

Coming back to this patch specifically, I think I'd be more comfortable proposing a patch in k/k similar to https://github.com/kubernetes/kubernetes/pull/123713 which causes the node-lifecycle controller to have some explicit behaviour (e.g. ignore) for Nodes which don't have a providerID.

I continue to worry that in this PR we are mis-reporting to cloud-provider on the basis that we expect cloud-provider to have a certain behaviour. I think that will be fragile and hard to maintain.

dulek commented 1 month ago

/ok-to-test

k8s-ci-robot commented 1 month ago

@sergelogvinov: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| pull-cloud-provider-openstack-check | a2ca362dc3c2e5417c80deb4ecd26d5c56dceb09 | link | true | /test pull-cloud-provider-openstack-check |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository. I understand the commands that are listed [here](https://go.k8s.io/bot-commands).