vponomaryov opened this issue 3 years ago · Status: Open
The PV that was part of the deleted host instance must be reused.
If you delete a node with local storage, then a new PV must be created on some other node.
I am not sure why there are 5 released PVs though.
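For anyone digging into this, a quick way to see which claims the leftover Released PVs still reference (a generic kubectl sketch, not from the original report; substitute a real PV name):

    # List PVs stuck in Released and the claim each one still references
    kubectl get pv -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,CLAIM:.spec.claimRef.name | grep Released

    # Inspect a single leftover PV in detail
    kubectl describe pv <pv-name>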
we have landed big changes since this was reported (in #534), are you still experiencing the issue?
Need to re-verify it. The automation we use for this scenario is currently skipped because of the bug.
While trying to verify it, I ran into another bug: https://github.com/scylladb/scylla-operator/issues/687
this should be unblocked now, please try again
The old PVs could be an issue with the provisioner. Moving this to 1.5; we can backport if it proves to be an issue with the operator.
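If the provisioner is the suspect, its logs should show whether it ever processes the Released volumes. A rough sketch, assuming the static local-volume provisioner runs as a DaemonSet (the namespace and object name below are assumptions, not taken from this report):

    # Locate the provisioner pods (name/namespace vary by deployment)
    kubectl get pods --all-namespaces | grep -i provisioner

    # Tail its logs and look for the Released PV names
    kubectl -n kube-system logs daemonset/local-volume-provisioner --tail=200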
@tnozicka, I reproduced it using the latest operator as of now, v1.4.0-alpha.0-87-g4bc9b0c: kubernetes-45cabb2c.tar.gz. In this case I used EKS. The number of redundant PVs equals the number of Scylla member recreations:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
local-pv-1121913b 3537Gi RWO Delete Released scylla/data-sct-cluster-us-east1-b-us-east1-1 local-raid-disks 153m Filesystem
local-pv-19b585e4 3537Gi RWO Delete Released scylla/data-sct-cluster-us-east1-b-us-east1-0 local-raid-disks 127m Filesystem
local-pv-19c7224d 3537Gi RWO Delete Released scylla/data-sct-cluster-us-east1-b-us-east1-0 local-raid-disks 33m Filesystem
local-pv-1cfddd87 3537Gi RWO Delete Released scylla/data-sct-cluster-us-east1-b-us-east1-2 local-raid-disks 101m Filesystem
local-pv-30a26c32 3537Gi RWO Delete Bound scylla/data-sct-cluster-us-east1-b-us-east1-0 local-raid-disks 9m5s Filesystem
local-pv-36bd02ed 3537Gi RWO Delete Released scylla/data-sct-cluster-us-east1-b-us-east1-0 local-raid-disks 76m Filesystem
local-pv-4f046023 3537Gi RWO Delete Bound scylla/data-sct-cluster-us-east1-b-us-east1-2 local-raid-disks 16m Filesystem
local-pv-6d7001ab 3537Gi RWO Delete Released scylla/data-sct-cluster-us-east1-b-us-east1-2 local-raid-disks 49m Filesystem
local-pv-86081c4d 3537Gi RWO Delete Released scylla/data-sct-cluster-us-east1-b-us-east1-2 local-raid-disks 172m Filesystem
local-pv-9106241e 3537Gi RWO Delete Released scylla/data-sct-cluster-us-east1-b-us-east1-1 local-raid-disks 165m Filesystem
local-pv-9c0cd8b5 3537Gi RWO Delete Released scylla/data-sct-cluster-us-east1-b-us-east1-2 local-raid-disks 3h43m Filesystem
local-pv-9df96263 3537Gi RWO Delete Bound scylla/data-sct-cluster-us-east1-b-us-east1-1 local-raid-disks 85m Filesystem
local-pv-badf726e 3537Gi RWO Delete Released scylla/data-sct-cluster-us-east1-b-us-east1-0 local-raid-disks 59m Filesystem
local-pv-c67780b 3537Gi RWO Delete Released scylla/data-sct-cluster-us-east1-b-us-east1-1 local-raid-disks 111m Filesystem
local-pv-d3cda7a 3537Gi RWO Delete Released scylla/data-sct-cluster-us-east1-b-us-east1-0 local-raid-disks 137m Filesystem
local-pv-ebb69c04 3537Gi RWO Delete Available local-raid-disks 7m30s Filesystem
local-pv-f545bc86 3537Gi RWO Delete Released scylla/data-sct-cluster-us-east1-b-us-east1-0 local-raid-disks 163m Filesystem
local-pv-f72642c1 3537Gi RWO Delete Released scylla/data-sct-cluster-us-east1-b-us-east1-1 local-raid-disks 3h43m Filesystem
pvc-70b73d04-27df-401d-a8f1-dac079290045 10Gi RWO Delete Bound scylla-manager/data-scylla-manager-manager-dc-manager-rack-0 gp2 3h45m Filesystem
pvc-7d3a34bc-78b1-432b-9980-511262cda52f 10Gi RWO Delete Bound minio/minio gp2 3h45m Filesystem
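One way to confirm that the leftover count tracks the member recreations is to group the Released PVs by the claim they still reference (a generic sketch over a listing like the one above; each recreation should add one entry):

    # Count Released PVs per (former) claim
    kubectl get pv -o custom-columns=CLAIM:.spec.claimRef.name,PHASE:.status.phase \
      | grep Released | awk '{print $1}' | sort | uniq -c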
The Scylla Operator project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After a further period of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

/lifecycle stale
Describe the bug
If we delete a K8S node and its host instance, and then mark the pod with the replacement label, we get a new Scylla pod that doesn't find the existing PV and gets a newly created one for the first several times. But after several such attempts we end up with the following list of PVs:
And PVC for it:
And failed pod:
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The PV that was part of the deleted host instance must be reused. No new PVs must be created.
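As a manual stop-gap (not the operator's own mechanism), a stale Released PV can either be returned to the pool or removed; a minimal sketch, assuming you have verified the volume is safe to touch:

    # Option A: clear the stale claim reference so the PV becomes Available again
    # (only useful if the node backing this local PV still exists)
    kubectl patch pv <pv-name> -p '{"spec":{"claimRef":null}}'

    # Option B: delete the stale PV outright when its backing node/instance is gone
    kubectl delete pv <pv-name>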
Config Files
If relevant, upload your configuration files here using GitHub; there is no need to upload them to any 3rd-party service.
Logs
kubernetes-c01abadb.tar.gz
Environment:
Additional context
Add any other context about the problem here.