kubernetes-csi / csi-test

CSI test frameworks
Apache License 2.0

CSI sanity test with topology feature gate enabled #505

Closed · reenakabra closed this issue 1 month ago

reenakabra commented 6 months ago

While trying to run the CSI sanity tests against my CSI driver, which has the topology feature gate enabled, test cases are failing during CreateVolume with the following error:

NodeStageVolume
/root/dep/src/github.com/kubernetes-csi/csi-test/v3/pkg/sanity/node.go:530
  should fail when no volume capability is provided [It]
  /root/dep/src/github.com/kubernetes-csi/csi-test/v3/pkg/sanity/node.go:581

Unexpected error:
    <*status.statusError | 0xc000130870>: {
        Code: 3,
        Message: "Invalid topology constraint, more than one preffered topology found",
        Details: nil,
        XXX_NoUnkeyedLiteral: {},
        XXX_unrecognized: nil,
        XXX_sizecache: 0,
    }
    rpc error: code = InvalidArgument desc = Invalid topology constraint, more than one preffered topology found
occurred

/root/dep/src/github.com/kubernetes-csi/csi-test/v3/pkg/sanity/node.go:605

Currently, all the nodes where the CSI node plugin is running have the topology label set.

When creating a volume manually through a PVC, it works fine.

Is there anything specific we need to do for the csi-sanity tests?
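
For reference, we invoke the sanity suite from a Go test roughly like this (a minimal sketch against the v3 `sanity` package, which is what the paths in the logs point at; exact `Config` fields may vary by csi-test version, and the scratch directories are illustrative):

```go
package driver_test

import (
	"testing"

	"github.com/kubernetes-csi/csi-test/v3/pkg/sanity"
)

func TestDriverSanity(t *testing.T) {
	config := &sanity.Config{
		// Socket the driver is listening on (same one the logs show).
		Address: "/var/lib/kubelet/plugins/org.veritas.infoscale/csi.sock",
		// Scratch directories for the node staging/publish tests
		// (illustrative paths; any writable location works).
		TargetPath:  "/tmp/csi-sanity-target",
		StagingPath: "/tmp/csi-sanity-staging",
	}
	sanity.Test(t, config)
}
```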

Also, we have a specific format for the VolumeID. The error below, related to the VolumeID, was also reported:

Controller Service [Controller Server] DeleteVolume
  should succeed when an invalid volume id is used
  /root/dep/src/github.com/kubernetes-csi/csi-test/v3/pkg/sanity/controller.go:817
STEP: reusing connection to CSI driver at /var/lib/kubelet/plugins/org.veritas.infoscale/csi.sock
STEP: creating mount and staging directories

• Failure [0.001 seconds]
Controller Service [Controller Server]
/root/dep/src/github.com/kubernetes-csi/csi-test/v3/pkg/sanity/tests.go:44
  DeleteVolume
  /root/dep/src/github.com/kubernetes-csi/csi-test/v3/pkg/sanity/controller.go:795
    should succeed when an invalid volume id is used [It]
    /root/dep/src/github.com/kubernetes-csi/csi-test/v3/pkg/sanity/controller.go:817

Unexpected error:
    <*status.statusError | 0xc000637270>: {
        Code: 13,
        Message: "VolumeID not in expected format",
        Details: nil,
        XXX_NoUnkeyedLiteral: {},
        XXX_unrecognized: nil,
        XXX_sizecache: 0,
    }
    rpc error: code = Internal desc = VolumeID not in expected format
occurred

/root/dep/src/github.com/kubernetes-csi/csi-test/v3/pkg/sanity/controller.go:826

Can you please suggest how to run the csi-sanity tests when topology is enabled and a specific VolumeID format is required?
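
On the VolumeID format question: if the csi-test version in use has the `IDGen` hook on `sanity.Config` (recent v3 releases expose a `sanity.IDGenerator` interface for this, as far as I can tell), a driver can supply IDs in its own format instead of the default random ones. A sketch of what we could plug in (the `vendor-vol-%d` scheme is purely illustrative, not our real format):

```go
package driver_test

import (
	"fmt"
	"sync/atomic"

	"github.com/kubernetes-csi/csi-test/v3/pkg/sanity"
)

// formatIDGenerator produces volume/node IDs in a driver-specific
// format. The scheme below is illustrative only.
type formatIDGenerator struct {
	counter uint64
}

var _ sanity.IDGenerator = &formatIDGenerator{}

// GenerateUniqueValidVolumeID returns a fresh ID the driver will accept.
func (g *formatIDGenerator) GenerateUniqueValidVolumeID() string {
	return fmt.Sprintf("vendor-vol-%d", atomic.AddUint64(&g.counter, 1))
}

// GenerateInvalidVolumeID returns a deliberately malformed ID; the
// DeleteVolume test above still expects the driver to answer OK for it.
func (g *formatIDGenerator) GenerateInvalidVolumeID() string {
	return "not-a-valid-volume-id"
}

// GenerateUniqueValidNodeID returns a fresh node ID in the same scheme.
func (g *formatIDGenerator) GenerateUniqueValidNodeID() string {
	return fmt.Sprintf("vendor-node-%d", atomic.AddUint64(&g.counter, 1))
}

// GenerateInvalidNodeID returns a deliberately malformed node ID.
func (g *formatIDGenerator) GenerateInvalidNodeID() string {
	return "not-a-valid-node-id"
}
```

This would be wired in with `config.IDGen = &formatIDGenerator{}` before calling `sanity.Test`. Even with a format-aware generator, though, the DeleteVolume case above expects success for an unrecognized ID (DeleteVolume is idempotent per the CSI spec), so returning `codes.Internal` for a malformed ID would still fail it.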

reenakabra commented 6 months ago

Can I please get some input on how to run csi-sanity with the topology feature enabled?

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 month ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-csi/csi-test/issues/505#issuecomment-2180891143):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.