Closed by @ramineni 2 years ago
/cc @gman0
manila sanity tests: https://github.com/kubernetes/test-infra/pull/23520
test-infra: https://github.com/kubernetes/test-infra/pull/23878
manila e2e tests: https://github.com/kubernetes/cloud-provider-openstack/pull/1656
After discussing this with @ramineni, she pointed out that CSI e2e tests should in fact run in a multi-node environment in order for a CSI driver to be eligible for GA graduation.
I've pushed two PRs that I believed were good to have for manila-csi e2e:
They were intended to give CI job authors the ability to spin up a single-node Kubernetes cluster when OpenStack VMs and a multi-node setup are not needed. This would save the time spent installing the respective OpenStack services, shortening the overall job runs.
To satisfy the GA requirements, though, we've agreed to use a multi-node environment for the manila-csi e2e tests as well. For this reason I'll close the PRs mentioned above for now.
We also considered the possibility of having two jobs for Manila: one with a single-node setup for regular PRs, and a second with a multi-node setup that would run e.g. when a new CPO release is tagged, or when deemed necessary. We may turn to this option later, depending e.g. on how long the tests take to run.
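For illustration, the two-job split described above could be sketched as Prow job stanzas. This is only a hypothetical sketch: the job names, container image, and entrypoint scripts below are placeholders, not the actual configuration in test-infra:

```yaml
# Hypothetical sketch of the two-job split discussed above.
# Job names, image tag, and entrypoint scripts are placeholders.
presubmits:
  kubernetes/cloud-provider-openstack:
    # Fast single-node job for regular PRs
    - name: openstack-cloud-csi-manila-e2e-single-node  # placeholder name
      decorate: true
      spec:
        containers:
          - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest-master  # placeholder tag
            command: ["./tests/ci-manila-e2e-single-node.sh"]  # placeholder script

periodics:
  # Heavier multi-node job, run on a schedule (or when a release is tagged)
  - name: openstack-cloud-csi-manila-e2e-multi-node  # placeholder name
    interval: 24h
    decorate: true
    spec:
      containers:
        - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest-master  # placeholder tag
          command: ["./tests/ci-manila-e2e-multi-node.sh"]  # placeholder script
```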
> After discussing this with @ramineni, she pointed out that CSI e2e tests should in fact run in a multi-node environment in order for a CSI driver to be eligible for GA graduation.
Re-wording my intent here: IMO it's always recommended to use a production-like environment setup in CI instead of a local cluster when possible. As discussed with @gman0, I understand there are no particular issues or bottlenecks with using multi-node, so I suggest we go with that setup instead.
@gman0 we don't maintain a status (alpha, beta, GA) as such for our plugins, AFAIK. This is one of the criteria we came across while driving CSI migration, and it's one of the reasons we started migrating the jobs to a multi-node setup. So I suggest not going back to local-up-cluster.sh or similar unless it's a necessity for running the jobs.
But I agree that we don't need to deploy the services that aren't required or used by the job. We can update/split the setup as required.
We don't have CSI sanity tests currently enabled right?
@gman0 we have them added here: https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/cloud-provider-openstack/provider-openstack-presubmits-release-master-config.yaml#L125 . Are they not getting triggered?
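As a general note on triggering: a Prow presubmit runs automatically only when `always_run: true` is set or a `run_if_changed` pattern matches the PR's changed files; otherwise it must be requested explicitly with `/test <job-name>`. A minimal sketch of such a stanza (the job name, pattern, image, and script below are illustrative assumptions, not the real config):

```yaml
presubmits:
  kubernetes/cloud-provider-openstack:
    - name: openstack-cloud-csi-manila-sanity  # illustrative name
      decorate: true
      always_run: false
      # Without always_run: true or a matching run_if_changed pattern,
      # the job only runs when requested via "/test <job-name>".
      run_if_changed: '^pkg/csi/manila/'  # illustrative pattern
      spec:
        containers:
          - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest-master  # placeholder tag
            command: ["./tests/sanity/manila/run.sh"]  # placeholder script
```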
@gman0 Also, do you have plans to enable other e2e test suites for this release? IMO it's good to have them when you are bumping sidecars, to verify nothing breaks.
@ramineni yes of course, I'm just busy with other things at the moment.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Mark this issue as rotten with `/lifecycle rotten`
- Close this issue with `/close`
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I think we can close this. Most of the e2e test suites have been integrated into manila-csi, with the exception of FsGroupChangePolicyTestSuite, which I still need to get around to looking at. See notes here: https://github.com/kubernetes/cloud-provider-openstack/pull/1762#issuecomment-1019997426.
@gman0 +1
/close
@ramineni: Closing this issue.
Is this a BUG REPORT or FEATURE REQUEST?: This issue tracks the progress of adding the manila jobs to test-infra.
Parent issue: #1613