[Open] crandles opened this issue 1 year ago
@crandles In https://github.com/kubernetes-sigs/e2e-framework/issues/133 we kind of dug into the idea of running benchmark workflows, but didn't manage to take it further. I would love to see that being picked up again; it would be a wonderful addition.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
This may still be something we need to check out. cc @crandles @harshanarayana
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
I like the idea of offering a lightweight way of launching feature tests.
Started to look into this again:

- I added a `RunTest` helper that called the same underlying helper functions as `Test`, so that the before/after helpers ran -- and this works -- but without further code modifications, the feature/assessment concepts still leak through test names (if the feature name is empty, it is rendered as `Feature-1`, an assessment as `Assessment-1`, etc.); see the sketch below
- `testing.T` is used throughout the current set of functions, and we need `testing.B` for benchmarks
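For context, here is a minimal sketch of the existing features-package path that produces those default names (standard sigs.k8s.io/e2e-framework API; the empty names are deliberate, just to trigger the `Feature-1`/`Assessment-1` rendering described above):

```go
package example_test

import (
	"context"
	"os"
	"testing"

	"sigs.k8s.io/e2e-framework/pkg/env"
	"sigs.k8s.io/e2e-framework/pkg/envconf"
	"sigs.k8s.io/e2e-framework/pkg/features"
)

var testenv env.Environment

func TestMain(m *testing.M) {
	testenv = env.New()
	os.Exit(testenv.Run(m))
}

// With the feature and assessment names left empty, the sub-test is
// reported as something like TestViaFeatures/Feature-1/Assessment-1,
// per the comment above -- the name leakage being discussed.
func TestViaFeatures(t *testing.T) {
	f := features.New("").
		Assess("", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
			// plain Go test logic goes here
			return ctx
		}).
		Feature()
	testenv.Test(t, f)
}
```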
@crandles this sounds awesome. Yes, this framework did not have benchmarks in mind, but they would be a great addition. This is exciting; I can't wait to see how it turns out.
Support the execution of tests without requiring use of the features package.
To the Environment interface, I propose the addition of two new functions:
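A plausible sketch of what those two additions could look like, assuming the helpers mirror `Test` and take a name plus a plain callback (the exact names, parameters, and return types here are assumptions, not settled by this issue):

```go
package env

import (
	"context"
	"testing"
)

type Environment interface {
	// ... existing methods (Setup, Finish, Test, Run, ...) elided

	// RunTest would run fn as a sub-test of t, invoking the same
	// before/after-test hooks that Test uses, without requiring the
	// features package.
	RunTest(t *testing.T, name string, fn func(context.Context, *testing.T) context.Context) bool

	// RunBenchmark would be the testing.B counterpart of RunTest.
	RunBenchmark(b *testing.B, name string, fn func(context.Context, *testing.B) context.Context) bool
}
```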
I am unsure what, if anything, makes sense as return values; `t.Run` returns a bool, which may fit here. Additionally, `RunBenchmark` might deserve its own issue, given the current overall lack of support for benchmarks.

Note, there is some overlap in wording with this existing function; I am unsure if the above naming will be confusing:
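Presumably this refers to the existing `Run` method on the Environment interface, which drives the whole suite from `TestMain` rather than an individual test:

```go
package env

import "testing"

type Environment interface {
	// Run launches the test suite from TestMain (typically via
	// os.Exit(testenv.Run(m))); its name is where the overlap with
	// the proposed RunTest/RunBenchmark comes in.
	Run(m *testing.M) int
	// ... other existing methods elided
}
```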
The goal is to allow a more streamlined path to writing Go tests natively, or otherwise with the framework of choice, without forcing usage of the features package.
Potential usage:
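A hypothetical sketch of such usage, under the signatures assumed above; `testenv` is taken to be the package-level Environment created in `TestMain`, and the test/benchmark names here are illustrative only:

```go
package example_test

import (
	"context"
	"testing"
)

// Hypothetical usage of the proposed helpers.
func TestDeploymentReady(t *testing.T) {
	testenv.RunTest(t, "deployment becomes ready", func(ctx context.Context, t *testing.T) context.Context {
		// plain Go test logic; the environment's before/after-test
		// funcs would still run around this callback
		return ctx
	})
}

func BenchmarkCreateDeployments(b *testing.B) {
	testenv.RunBenchmark(b, "create deployments", func(ctx context.Context, b *testing.B) context.Context {
		for i := 0; i < b.N; i++ {
			// benchmarked operation against the cluster
		}
		return ctx
	})
}
```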