onsi / ginkgo

A Modern Testing Framework for Go
http://onsi.github.io/ginkgo/
MIT License

Ability for Describe to handle success and error conditions that occur before RunSpecs #1238

Open dschveninger opened 1 year ago

dschveninger commented 1 year ago

We have a design pattern that we would like you to consider as an enhancement for Ginkgo.

We have separated the data collection process and the assertion process into two different phases. With that design, data loading and data collection issues can occur before RunSpecs is ever called.

In the test function of the Ginkgo suite file, we do some data loading, API calls, and validation before we call RunSpecs. We do this before RunSpecs so that the data can drive the different dynamic test cases we need to create, using if statements and for loops. To avoid panicking before RunSpecs is called, we collect the errors in a slice. Then we have a conditional statement at the beginning of every test case DSL file that short-circuits the creation of any specifications when there is an initialization error. Finally, we have a single file that builds a specification only when an error condition occurred.
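A stripped-down sketch of that pattern, collapsed into one listing for brevity; the names (initErrors, loadTestData, the spec contents) are all illustrative, not our real suite:

```go
package mysuite_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// initErrors collects every problem hit while loading or collecting data,
// so nothing panics before RunSpecs and the suite still emits its report.
var initErrors []error

func TestMySuite(t *testing.T) {
	RegisterFailHandler(Fail)
	if err := loadTestData(); err != nil {
		initErrors = append(initErrors, err)
	}
	RunSpecs(t, "MySuite")
}

func loadTestData() error {
	return nil // data loading, API calls, and validation happen here
}

// Every sunny-day spec file starts with this guard, so it builds no
// specifications when initialization failed.
var _ = Describe("some feature", func() {
	if len(initErrors) > 0 {
		return
	}
	It("asserts against the collected data", func() {
		Expect(initErrors).To(BeEmpty()) // real assertions go here
	})
})

// The single file that builds a specification only when an error occurred.
var _ = Describe("initialization", func() {
	if len(initErrors) == 0 {
		return
	}
	It("loaded and collected all required data", func() {
		for _, err := range initErrors {
			AddReportEntry("initialization error", err.Error())
		}
		Fail("data collection failed before RunSpecs")
	})
})
```

Because container bodies run during Ginkgo's tree-construction phase inside RunSpecs, the guards see whatever initErrors ended up holding after the pre-RunSpecs loading step.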

The main reason we do this is that we want consistent Ginkgo JSON output describing the execution of the suite even when an error condition occurs. That structured data feeds a failure-analysis flow that generates action items according to the suite's failed test cases. It also lets us limit the number of failed test cases when data could not be collected or loaded, so our failures describe the underlying issue instead of reporting test cases that fail due to missing data.

With that said, we would like some type of decorator, extension, or suggested DSL addition that lets us build specifications only when an error has occurred, or only when no error has occurred. This would reduce the number of conditional return statements we have in the sunny-day-scenario DSL code.

As always, we appreciate the product you’ve created, and the support you provide on questions/issues like this.

Please let me know if I have not described our situation clearly enough.

onsi commented 1 year ago

Sorry for the radio silence - I've been pretty behind and my summer schedule doesn't afford me as much time for Ginkgo as I'd like. This is a really interesting usecase and I want to give it some thought. The one quick follow-up question I'd have is whether or not checking the condition explicitly and failing skipping in a BeforeEach would work for you - or if that would pollute your dataset too much?
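For concreteness, a rough sketch of that alternative, reusing the hypothetical initErrors slice from the sketch above (and the same ginkgo/v2 dot-import):

```go
// The specs are still built and reported, but each one is failed (or
// skipped) at run time when initialization did not succeed.
var _ = Describe("some feature", func() {
	BeforeEach(func() {
		if len(initErrors) > 0 {
			Fail("data collection failed before RunSpecs")
			// or: Skip("data collection failed before RunSpecs")
		}
	})

	It("asserts against the collected data", func() {
		// assertions against the loaded data go here
	})
})
```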

There isn't a DSL/Decorator in place for this right now - but I'd be open to adding something.

dschveninger commented 1 year ago

No apologies necessary; we appreciate the product you have built and the amount of support that you give it.

We do use Skip inside an It at run time, after the specs are built, when bugs in our API have been identified, until they are fixed. Since the API test suite then produces skipped test cases, we have had to post-process the Ginkgo JSON output to make sure that skipped test cases correspond only to open bugs, using our standard BlockingWorkItem function. We find that validating that skipped tests are not hiding an issue or a failure can be difficult and time consuming.
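Roughly, that runtime-skip flow looks like the sketch below; blockingWorkItem stands in for our real BlockingWorkItem helper, so its name and signature here are illustrative only (same ginkgo/v2 dot-import as above):

```go
// A spec that is skipped at run time while a known API bug is still open.
var _ = Describe("the API", func() {
	It("returns the expected payload", func() {
		if blockingWorkItem("BUG-1234") {
			Skip("blocked by open work item BUG-1234")
		}
		// assertions against the API response go here
	})
})

// blockingWorkItem reports whether the given work item is still open; the
// real implementation would query our work-item tracker.
func blockingWorkItem(id string) bool {
	return true
}
```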

We have another test suite that is a set of declarative YAML test commands. A runner framework gathers data, and that data is fed into a dynamic spec-building process that lets us replicate test cases across resources like multiple VMs and k8s clusters. In the rare case that the runner hangs or data is not collected, we would rather have failures reported at the runner or resource level instead of across the 25 or 50 different test commands per resource. We are trying to prototype ideas, but to date we have not found a way to produce only one failed test case per resource when the test commands did not run, or one per test suite when the supporting data cannot be loaded before the spec-building process, without repeating the same set of if statements all over our files and our set of base Describes.
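The shape we keep circling is roughly this: build the per-command specs from the collected data, and when collection failed for a resource, build a single failing spec for that resource instead. All types and names below are illustrative (same ginkgo/v2 dot-import as above):

```go
// Illustrative data the runner framework would populate before RunSpecs.
type commandResult struct{ Name string }

type resourceData struct {
	Name          string
	CollectionErr error
	Commands      []commandResult
}

var collectedResources []resourceData

var _ = Describe("declarative test commands", func() {
	for _, res := range collectedResources {
		res := res // capture the per-iteration value for the closure below
		if res.CollectionErr != nil {
			// One failing spec per resource whose data was never collected,
			// instead of 25-50 failing per-command specs.
			It("collected data for "+res.Name, func() {
				Fail(res.CollectionErr.Error())
			})
			continue
		}
		for _, cmd := range res.Commands {
			It(res.Name+" runs "+cmd.Name, func() {
				// assertions against the command's collected output go here
			})
		}
	}
})
```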

I will post a response if we find a way to run the one test case that is defined in one file while making sure the other 10 or so files do not build any specifications when the data was not collected.