ceph / ceph-csi

CSI driver for Ceph
Apache License 2.0

Support for multiple Ceph clusters with the same StorageClass based on node topology labels #4611

Closed · diogenxs closed this 2 weeks ago

diogenxs commented 1 month ago

Describe the feature you'd like to have

I would like to be able to connect to different Ceph clusters based on node topology labels rather than being restricted to a single clusterID per StorageClass. This feature should allow clusterID to be defined within each pool configuration under a common StorageClass, leveraging topologyConstrainedPools.

What is the value to the end user? (why is it a priority?)

This feature would enable end users who manage multiple Ceph clusters across various topologies to utilize a single StorageClass configuration.

How will we know we have a good solution? (acceptance criteria)

Users can specify multiple Ceph clusters within a single StorageClass, associated with different topology labels. ceph-csi can dynamically determine the correct Ceph cluster to interact with based on the node's topology label during volume provisioning.
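
For context, each node would carry topology labels matching the domainSegments in the proposal. A minimal sketch of such a node (as seen via kubectl get node -o yaml), assuming the ceph-csi nodeplugins are started with --domainlabels=region,zone; the node name is a placeholder and the exact label keys depend on that flag:

apiVersion: v1
kind: Node
metadata:
  name: worker-east-1   # hypothetical node name
  labels:
    region: east        # matched against domainSegments in the StorageClass
    zone: zone1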

For example, with clusterID moved into each topologyConstrainedPools entry (dataPool is optional in every entry: an erasure-coded pool for data):

apiVersion: storage.k8s.io/v1
kind: StorageClass
parameters:
  # ...
  topologyConstrainedPools: |
    [
      {
        "clusterID": "east",
        "poolName": "pool0",
        "dataPool": "ec-pool0",
        "domainSegments": [
          {"domainLabel": "region", "value": "east"},
          {"domainLabel": "zone", "value": "zone1"}
        ]
      },
      {
        "clusterID": "east",
        "poolName": "pool1",
        "dataPool": "ec-pool1",
        "domainSegments": [
          {"domainLabel": "region", "value": "east"},
          {"domainLabel": "zone", "value": "zone2"}
        ]
      },
      {
        "clusterID": "west",
        "poolName": "pool2",
        "dataPool": "ec-pool2",
        "domainSegments": [
          {"domainLabel": "region", "value": "west"},
          {"domainLabel": "zone", "value": "zone1"}
        ]
      }
    ]
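
For this to work, both clusterIDs would also need to be registered in the ceph-csi configuration. A sketch of the usual ceph-csi-config ConfigMap, with placeholder monitor addresses:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
data:
  config.json: |-
    [
      {
        "clusterID": "east",
        "monitors": ["10.0.0.1:6789", "10.0.0.2:6789"]
      },
      {
        "clusterID": "west",
        "monitors": ["10.1.0.1:6789", "10.1.0.2:6789"]
      }
    ]

ceph-csi would then resolve the clusterID selected from topologyConstrainedPools against this map to find the monitor addresses of the matching cluster.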

Additional context

https://ceph-storage.slack.com/archives/C05522L7P60/p1715118579305879

github-actions[bot] commented 3 weeks ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

github-actions[bot] commented 2 weeks ago

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.