Open verult opened 4 years ago
The tests might just be creating a copy of the StorageClass config. But for this specific test case, is allowVolumeExpansion set somewhere else in the test? Otherwise, resize couldn't have worked.
/cc @saikat-royc @mattcary
Hmm, it may be created in k/k/test/e2e/storage/drivers/csi.go:GetDynamicProvisionStorageClass which means it's always using pd-standard?
that would need to be corrected if so. Good find, thanks!
Aside from that, it might be good to include allowVolumeExpansion in the configured StorageClasses.
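For illustration, a configured StorageClass carrying that field could look like the sketch below. This is a hypothetical example, not one of the repo's actual configs; the name and parameters are assumptions.

```yaml
# Hypothetical example; name and parameters are illustrative only.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gcepd-standard
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
# Without this field, resize tests cannot expand volumes
# provisioned from this StorageClass.
allowVolumeExpansion: true
```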
It should be created from https://github.com/kubernetes/kubernetes/blob/2ad48d384dc3ac85f25edc365c06fd83b2dfddb0/test/e2e/storage/external/external.go#L264 which is the storageclass we pass into the test.
oh phew that would be much better.
Ah, and allowVolumeExpansion is explicitly set for "AllowExpansion" test patterns, so this should be working as expected then.
/reopen

I think there is a minor bug in that logic, though: it only overrides the storageclass.AllowVolumeExpansion field when the test pattern sets it to true. It needs to override it in both cases, so that it can properly handle StorageClasses that already have the field set.
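A minimal sketch of the bug and the proposed fix, using hypothetical types that mirror the e2e test logic (this is not the actual k/k source; names like `TestPattern` and `applyPattern*` are illustrative):

```go
package main

import "fmt"

// TestPattern is a hypothetical stand-in for the e2e test pattern.
type TestPattern struct {
	AllowExpansion bool
}

// StorageClass is a hypothetical stand-in; the real field is *bool.
type StorageClass struct {
	AllowVolumeExpansion *bool
}

// applyPatternBuggy mirrors the suspected bug: it only overrides the
// field when the pattern enables expansion, so a StorageClass that
// already has allowVolumeExpansion: true is never reset.
func applyPatternBuggy(sc *StorageClass, p TestPattern) {
	if p.AllowExpansion {
		v := true
		sc.AllowVolumeExpansion = &v
	}
}

// applyPatternFixed overrides the field in both cases, so the
// pattern's intent always wins over the incoming StorageClass.
func applyPatternFixed(sc *StorageClass, p TestPattern) {
	v := p.AllowExpansion
	sc.AllowVolumeExpansion = &v
}

func main() {
	t := true
	sc := &StorageClass{AllowVolumeExpansion: &t}
	// A non-expansion pattern should clear the pre-set field.
	applyPatternFixed(sc, TestPattern{AllowExpansion: false})
	fmt.Println(*sc.AllowVolumeExpansion) // false
}
```

With the buggy version, the same call would leave the field at true and a non-expansion test could unexpectedly run against an expandable StorageClass.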
@msau42: Reopened this issue.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten
/remove-lifecycle rotten
/lifecycle frozen
Noticed that some test jobs are creating standalone StorageClasses, such as this resize test:
https://prow-gob.gcpnode.com/view/gcs/gob-prow/logs/periodic-gcp-pd-csi-driver-master-latest-k8s-integration-1-16-gke/1313498478982205440#1:build-log.txt%3A8874
Should all tests use only the configured StorageClasses?
The StorageClasses configured in k8s-integration also don't have allowVolumeExpansion set.