Open pohly opened 1 year ago
thanks for adding the issue @pohly I'd like your input on some design questions. I can imagine two approaches:

1. `PreviewSpecs` simply constructs the spec tree and returns a `Report` object. It does not honor any configuration and always produces the complete spec list. The order of specs is not well-defined.
2. `PreviewSpecs` operates more like `--dry-run`: if called without arguments it uses the global Ginkgo config (configured via the cli/flags) to construct, order, and filter the tree. The various randomization flags are honored, as are all the filter flags (note that if a spec is filtered out it still appears in the tree - however it will have `SpecStateSkipped`). As with `RunSpecs` you can provide custom `GinkgoConfiguration()`s to `PreviewSpecs` to see the effects of a given configuration on the specs.

I haven't considered the implementation yet and it may prove that one of these is much cheaper than the other - but I wanted to discuss the design without that bias. Thoughts?
Can we do both?
Option 2 may be useful for users to quickly try out the effect of the CLI flags. Option 1 can be achieved by not setting any flags, so option 2 is more capable. However, if in that same run one wants to report "x out of y specs would run", then one needs both.
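The "x out of y specs would run" tally could be computed from a preview report along these lines. This is a stdlib-only sketch: `SpecPreview` is a hypothetical stand-in for the per-spec entries a real preview `Report` would contain, not Ginkgo's actual type.

```go
package main

import "fmt"

// SpecPreview is a hypothetical stand-in for a previewed spec:
// either runnable or filtered out (skipped) by the active config.
type SpecPreview struct {
	FullText string
	Skipped  bool
}

// tally reports how many of the previewed specs would actually run.
func tally(specs []SpecPreview) (wouldRun, total int) {
	total = len(specs)
	for _, s := range specs {
		if !s.Skipped {
			wouldRun++
		}
	}
	return wouldRun, total
}

func main() {
	specs := []SpecPreview{
		{FullText: "widgets render correctly"},
		{FullText: "widgets survive restarts", Skipped: true},
		{FullText: "gadgets emit metrics"},
	}
	x, y := tally(specs)
	fmt.Printf("%d out of %d specs would run\n", x, y)
}
```

Note that this only works if filtered-out specs still appear in the report (option 2's "skipped, not removed" behavior); with option 1 there is nothing to count against.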
Just curious, has there been any further progress on this? I would be interested in this functionality.
In my team we've switched to leveraging `ginkgo outline` as a way to see what tests will be run, because we've shifted perspective and treat the `outline` output like a BDD Gherkin-style "feature scenario" to see what is being tested. But this breaks down for us: the outline command relies on the AST, and some of our tests are generated dynamically, so sometimes we end up with a big graph of `unknown` text. Having an interface to preview specs would help us out a lot.
I'm finally working on this and want to confirm that, just like `--dry-run`, `PreviewSpecs` will be mutually exclusive with `RunSpecs` and will require you to run in series. These constraints could conceivably be relaxed in the future (i.e. in a backward-compatible way) - but in the interest of getting this out, if I can make those simplifying constraints I can ship it sooner. Any concerns?
So you mean a process can invoke either `PreviewSpecs` or `RunSpecs`, but not both (i.e. first `PreviewSpecs`, then `RunSpecs`)?
The `-list-tests` and `-list-labels` that I am implementing in https://github.com/kubernetes/kubernetes/pull/112894 would be okay with that constraint.
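For illustration, a `-list-labels`-style flag could derive the set of all defined labels from a preview report roughly like this. Stdlib-only sketch: `labeledSpec` is a hypothetical stand-in for a previewed spec and its labels, not Ginkgo's real report type.

```go
package main

import (
	"fmt"
	"sort"
)

// labeledSpec is a hypothetical stand-in for a previewed spec
// together with the labels attached to it.
type labeledSpec struct {
	Name   string
	Labels []string
}

// uniqueLabels derives the sorted set of all labels defined in a suite -
// the kind of information a -list-labels flag would print.
func uniqueLabels(specs []labeledSpec) []string {
	seen := map[string]bool{}
	for _, s := range specs {
		for _, l := range s.Labels {
			seen[l] = true
		}
	}
	out := make([]string, 0, len(seen))
	for l := range seen {
		out = append(out, l)
	}
	sort.Strings(out)
	return out
}

func main() {
	specs := []labeledSpec{
		{Name: "a", Labels: []string{"slow", "serial"}},
		{Name: "b", Labels: []string{"slow"}},
	}
	fmt.Println(uniqueLabels(specs)) // [serial slow]
}
```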
But the PR also adds sanity checking of the test registration. I'm still discussing with @aojea whether it is better to panic when some bad call is invoked (my original approach) or to use a more elaborate "collect all errors during registration, report them together" approach (the current content of the PR). If I want to do the latter as a prerequisite before running tests, then I would have to do `PreviewSpecs` + "check for errors" + `RunSpecs`.
Having said that, shipping it sooner with the constraint and later relaxing it sounds good to me.
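The "collect all errors during registration, report them together" approach mentioned above could be sketched like this. All names here are hypothetical, stdlib-only illustrations of the pattern, not code from the linked PR.

```go
package main

import (
	"errors"
	"fmt"
)

// registrationChecker accumulates registration problems instead of
// panicking on the first bad call, so the suite can fail once, up front,
// with every problem listed - before the actual test run starts.
type registrationChecker struct {
	errs []error
}

// check records an error for specName when ok is false.
func (c *registrationChecker) check(specName string, ok bool, msg string) {
	if !ok {
		c.errs = append(c.errs, fmt.Errorf("%s: %s", specName, msg))
	}
}

// report joins everything collected; nil means registration was clean.
func (c *registrationChecker) report() error {
	return errors.Join(c.errs...)
}

func main() {
	var c registrationChecker
	c.check("spec A", true, "")
	c.check("spec B", false, "duplicate label")
	c.check("spec C", false, "empty description")
	if err := c.report(); err != nil {
		fmt.Println(err)
	}
}
```

With a preview step available, such a checker could run over the previewed spec tree before `RunSpecs` is ever invoked.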
Hey @pohly and @Dannyb48 I now have a... preview of `PreviewSpecs` up on the master branch. The docs are here, PTAL.
I'll take a look at allowing `PreviewSpecs` and `RunSpecs` to both run.
@pohly what about the constraint to run in series only?
actually - never mind. i think i've found a way around both constraints that isn't too expensive or too ugly. i'll push it to master after I add some tests
alrighty - sorry for all the noise. the latest code is now on master and both constraints are gone. You can call `PreviewSpecs` and `RunSpecs` in the same invocation of `ginkgo`, and you can call `PreviewSpecs` when running in parallel. Each parallel process will run `PreviewSpecs` and get back what should be a basically identical (modulo minor timestamp differences) `Report` of the whole suite (i.e. not the subset of specs that the particular process will run - which is not deterministically predictable, and probably not what you want anyway).
Excellent! I'll take a look.
I've now also tested with `PreviewSpecs` followed by `RunSpecs`. Everything is working as expected, so as far as I am concerned, this is ready for tagging a release.
👍 thanks @pohly - i'll cut a release now
`ginkgo --dry-run` can be useful for users to see what specs are defined. But sometimes test suite authors may want to provide other ways of listing specs or deriving information about them (e.g. all defined labels). For that, a `PreviewSpecs` function that returns a full `Report` would be useful.

Less useful alternative: adding a "don't produce any output" `ReportConfig` field, then calling `RunSpecs` with a `ReportAfterSuite` callback. More complicated to set up.

Originally discussed in https://gophers.slack.com/archives/CQQ50BBNW/p1686938649240809.