@cambierr Hi, are you running a global Keystone service or separate ones? And what about Nova and Octavia?
Multi-region is complex, and different deployers have different deployment models. For example, in our case we have 3 regions sharing a global Keystone, but the Nova services are separate between regions, which makes it impossible to run a k8s cluster across regions.
@lingxiankong in our case, we have a single Keystone instance that's common for all regions, and separate instances of the other services such as nova, cinder, ...
A very easy way to solve this, from my limited knowledge of the code, could be to add a regions property with a value like region-a,region-b,region-c and query them all with the instance ID deduced from the instance metadata. The only limitation I can see here is that instance names would need to be unique across regions?
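For illustration only, a minimal sketch of what that could look like in cloud.conf, assuming a hypothetical regions option (the option name, its placement in [Global], and the Keystone endpoint below are assumptions, not an existing feature):

```ini
[Global]
# Placeholder Keystone endpoint shared by all regions
auth-url=https://keystone.example.com:5000/v3
# Hypothetical option: instead of a single "region", list every region the
# provider should query with the instance ID read from the metadata service
regions=region-a,region-b,region-c
```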
I know a "normal" configuration for this would be a single OpenStack cluster with multiple AZs, but I have no control over the OpenStack cluster management :( (on-prem, managed by customers, or cloud, managed by OVH).
@cambierr It's more complex than that: how do instances located in different regions talk to each other (via fixed IPs)? How do Octavia amphorae in one region talk to instances in others? I'm also not sure whether cinder-csi is region-aware.
Well, cinder-csi is availability-zone aware... porting this to region-awareness should not be that complicated, I guess.
When it comes to networking, all machines are on the same networks across regions, for instance via OVH vRack.
I'm not familiar with how the network is managed by OVH, but say you create a LoadBalancer type of Service: after the openstack-cloud-controller-manager creates an Octavia amphora in region A, could the amphora talk to a k8s worker node in region B? How?
Basically, I'm not against this idea, but we would be more confident if you could provide more information. It would also be better if you could show a PoC.
We are working on such a PoC and it is starting to look good (cloud provider and CSI plugin for Cinder).
We'll share it soon so that we have something concrete to discuss!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
/reopen
I think this issue should not be closed. I'm in the same environment as @cambierr. This would be awesome!
@Sryther: You can't reopen an issue/PR unless you authored it or you are a collaborator.
This issue has been closed automatically as it was not getting enough interest, but feel free to reopen it if you have this feature requirement as well; you (or your team) are also welcome to contribute.
@lingxiankong: Reopened this issue.
Today I was able to "hack" openstack-cloud-controller-manager and cinder-csi-plugin by changing the content of cloud.conf:
- With region=A set and the openstack-cloud-controller-manager and cinder-csi-plugin daemonsets started, the nodes from region A were labelled with topology.kubernetes.io/region=A.
- After switching to region=B and restarting the same daemonsets, the missing nodes were labelled with topology.kubernetes.io/region=B and the previous ones still had topology.kubernetes.io/region=A.
It didn't help me with the next steps (like mounting PersistentVolumes from different regions), and removing the region parameter doesn't produce the expected behaviour either.
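For reference, a sketch of the two cloud.conf variants used in the hack above (the Keystone endpoint is a placeholder; A and B stand for the two regions):

```ini
# Pass 1: start the daemonsets with this file -> nodes in region A are
# labelled topology.kubernetes.io/region=A
[Global]
auth-url=https://keystone.example.com:5000/v3
region=A

# Pass 2: switch to region=B and restart the daemonsets -> the remaining
# nodes are labelled topology.kubernetes.io/region=B, while the region A
# nodes keep their earlier label
[Global]
auth-url=https://keystone.example.com:5000/v3
region=B
```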
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
@fejta-bot: Closing this issue.
/kind feature
From what I was able to see, there is no way to have multiple OpenStack regions in the same Kubernetes cluster... wouldn't this be a great addition to enable high availability?
In my use case, I have two regions corresponding to two datacenters that are about 0.600 ms away from each other, and being able to run a single k8s cluster that spans both of them would be a great addition to our deployments!