sbercloud-terraform / terraform-provider-sbercloud

Terraform SberCloud Provider
https://registry.terraform.io/providers/sbercloud-terraform/sbercloud/latest/docs
Mozilla Public License 2.0

StorageSelector cannot match all storage devices (current match:0), reason: BadRequest #252

Closed: zradeg closed this issue 11 months ago

zradeg commented 11 months ago

Greetings! I'm trying to use Terraform to create a CCE node pool with a storage configuration attached. I get the following error:

sbercloud_cce_node_pool.infra_node_pool: Creating...
  Error: Error creating sbercloud Node Pool: Bad request with: [POST https://cce.ru-moscow-1.hc.sbercloud.ru/api/v3/projects/.../ce2/clusters/.../nodepools], error message: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","code":400,"errorCode":"CCE.01400001","errorMessage":"Invalid request.","error_code":"CCE_CM.0004","error_msg":"Request is invalid","message":"StorageSelector(test) cannot match all storage devices (current match:0)","reason":"BadRequest"}

Here is the manifest used for the deployment:

resource "sbercloud_cce_cluster" "infra_cluster" {
  name                             = "cluster"
  flavor_id                        = "cce.s2.small"
  vpc_id                           = sbercloud_vpc.infra_vpc.id
  subnet_id                        = sbercloud_vpc_subnet.infra_subnet_1.id
  cluster_type                     = "VirtualMachine"
  cluster_version                  = "v1.25"
  container_network_type           = "overlay_l2"
  container_network_cidr           = "10.99.0.0/18"
  service_network_cidr             = "10.99.64.0/18"
  authentication_mode              = "rbac"
  enterprise_project_id            = "..."
  region                           = "ru-moscow-1"
  masters {
    availability_zone = "ru-moscow-1a"
  }
  masters {
    availability_zone = "ru-moscow-1b"
  }
  masters {
    availability_zone = "ru-moscow-1c"
  }
}

resource "sbercloud_cce_node_pool" "infra_node_pool" {
  cluster_id               = sbercloud_cce_cluster.infra_cluster.id
  name                     = "node-pool"
  os                       = "CentOS 7.6"
  initial_node_count       = 3
  flavor_id                = "c6.xlarge.2"
  scall_enable             = true
  min_node_count           = 3
  max_node_count           = 6
  scale_down_cooldown_time = 100
  priority                 = 1
  type                     = "vm"
  password                 = "..."

  root_volume {
    size       = 40
    volumetype = "SSD"
  }
  data_volumes {
    size       = 100
    volumetype = "SSD"
  }

  storage {
    selectors {
      name              = "test"
      type              = "evs"
      match_label_size  = "20"
      match_label_count = "1"
    }
    groups {
      name           = "vguser"
      selector_names = ["test"]

      virtual_spaces {
        name        = "test"
        size        = "100%"
        lvm_lv_type = "linear"
        lvm_path    = "/data/elastisearch"
      }
    }
  }
}

resource "sbercloud_evs_volume" "test" {
  name                    = "test"
  volume_type             = "SSD"
  size                    = 20
  multiattach             = true
  availability_zone       = "ru-moscow-1a"
  enterprise_project_id   = "..."
}

To create the manifest, I used the provider documentation: https://github.com/sbercloud-terraform/terraform-provider-sbercloud/blob/master/docs/resources/cce_node_pool.md#node-pool-with-storage-configuration

After running terraform apply, I can see in the console that an EVS volume named "test" is created. But creating the CCE node pool throws the error above, even though name = "test" is specified in storage.selectors.

What is the cause of the error, and how can I fix it?

HypnoChaka commented 11 months ago

We are working on your request. We need some time to check our API.

Ccaswell42 commented 11 months ago

The match_label_size parameter in the selectors block must match the size of one of the data_volumes blocks; in your case it should be 100.

You can try this manifest:

resource "sbercloud_cce_node_pool" "test" {
  cluster_id               = "..."
  name                     = "node-pool"
  os                       = "CentOS 7.6"
  initial_node_count       = 3
  flavor_id                = "c6.xlarge.2"
  scall_enable             = true
  min_node_count           = 3
  max_node_count           = 6
  scale_down_cooldown_time = 100
  priority                 = 1
  type                     = "vm"
  password                 = "..."

  root_volume {
    size       = 40
    volumetype = "SSD"
  }

  data_volumes {
    size       = 100
    volumetype = "SSD"
  }

  data_volumes {
    size       = 20
    volumetype = "SSD"
  }

  storage {
    selectors {
      name              = "test"
      type              = "evs"
      match_label_size  = "20"
      match_label_count = "1"
    }

    groups {
      name           = "vgpaas"
      selector_names = ["test"]
      cce_managed    = true

      virtual_spaces {
        name        = "user"
        size        = "80%"
        lvm_lv_type = "linear"
        lvm_path    = "/data/elastisearch"
      }

      virtual_spaces {
        name        = "runtime"
        size        = "10%"
      }
      virtual_spaces {
        name        = "kubernetes"
        size        = "10%"
      }
    }
  }
}
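
Alternatively, if you want to keep a single 100 GB data volume, you can change the selector instead of adding a second volume. A minimal sketch of that variant (only the storage-related blocks shown; the groups block stays as in the manifest above):

  data_volumes {
    size       = 100
    volumetype = "SSD"
  }

  storage {
    selectors {
      name              = "test"
      type              = "evs"
      match_label_size  = "100"
      match_label_count = "1"
    }

    # groups block unchanged from the manifest above
  }

Keep in mind that CCE still needs space for the container runtime and kubelet, which is why the manifest above splits the group into user, runtime and kubernetes virtual spaces.
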
zradeg commented 11 months ago

The match_label_size parameter in the selectors block must match the size of one of the data_volumes blocks; in your case it should be 100.

You can try this manifest...

Your example worked; the CCE node pool deployed without an error. But there is another problem: three EVS volumes were created, each attached to its own node. I need one shared disk attached to all nodes as persistent storage for stateful services. Is it possible to attach the resource "sbercloud_evs_volume" "test" from my manifest to all nodes?

Ccaswell42 commented 11 months ago

You can use the sbercloud_compute_volume_attach resource to attach your EVS volume "test" to all nodes.

https://github.com/sbercloud-terraform/terraform-provider-sbercloud/blob/master/docs/resources/compute_volume_attach.md

resource "sbercloud_evs_volume" "test" {
  name                    = "test"
  volume_type             = "SSD"
  size                    = 20
  multiattach             = true
  availability_zone       = "ru-moscow-1a"
  enterprise_project_id   = "..."
}

data "sbercloud_cce_nodes" "node" {
  cluster_id = sbercloud_cce_cluster.infra_cluster.id
}

resource "sbercloud_compute_volume_attach" "attachments" {
  count = 3
  instance_id = element(data.sbercloud_cce_nodes.node.nodes.*.server_id, count.index )
  volume_id   = sbercloud_evs_volume.test.id
}
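
One caveat with this combination: the sbercloud_cce_nodes data source may be read before the node pool has created its nodes, returning an empty list. A sketch of a safeguard, assuming the node pool resource from the first manifest, is to add an explicit dependency so the read is deferred until the pool exists:

data "sbercloud_cce_nodes" "node" {
  cluster_id = sbercloud_cce_cluster.infra_cluster.id

  # Defer reading the node list until the node pool (and its nodes) exists.
  depends_on = [sbercloud_cce_node_pool.infra_node_pool]
}

Note that multiattach = true on the volume, already set above, is required to attach a single EVS volume to several servers.
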
zradeg commented 11 months ago

You can use the sbercloud_compute_volume_attach resource to attach your EVS volume "test" to all nodes.

We deploy our nodes with cce_node_pool. Could you please describe how to use the sbercloud_compute_volume_attach resource when the instance_id values have to come from the list of nodes created by the cce_node_pool resource?
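
(A possible approach, not confirmed in this thread: the cce_node_pool resource does not export the IDs of the nodes it creates, so one workaround is to filter the sbercloud_cce_nodes data source shown above. The sketch below assumes that nodes created by a pool carry the pool name as a prefix of their node name, and that the data source exports name and server_id for each node; both assumptions should be verified against your provider version. startswith requires Terraform 1.3 or later.)

locals {
  # Assumption: CCE names node-pool nodes "<pool name>-<random suffix>".
  pool_server_ids = [
    for n in data.sbercloud_cce_nodes.node.nodes : n.server_id
    if startswith(n.name, sbercloud_cce_node_pool.infra_node_pool.name)
  ]
}

resource "sbercloud_compute_volume_attach" "attachments" {
  # count depends on values known only after apply; you may need to
  # create the node pool first (e.g. with -target) before this plan resolves.
  count       = length(local.pool_server_ids)
  instance_id = local.pool_server_ids[count.index]
  volume_id   = sbercloud_evs_volume.test.id
}
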