lyarwood opened this issue 1 year ago
We should make an effort to keep the Polarion reports, as they help us keep track of issues. Hopefully it's not much work if we follow the guide at https://onsi.github.io/ginkgo/MIGRATING_TO_V2#removed-custom-reporters.

In theory we just need to swap `RunSpecsWithDefaultAndCustomReporters` and `RunSpecsWithCustomReporters` for `RunSpecs` (see the sketch below), and CI should pass the corresponding reporter flag.
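A minimal sketch of the entry-point change, assuming a plain Go test bootstrap; the function and suite names below are illustrative, not taken from the actual suite:

```go
package tests_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// Ginkgo v1 passed the reporter list directly to the runner, e.g.:
//   RunSpecsWithDefaultAndCustomReporters(t, "Functional Tests", []Reporter{junitReporter, polarionReporter})
// In Ginkgo v2 that argument is gone: RunSpecs only takes the testing.T and the
// suite description, and machine-readable output (e.g. JUnit) is requested via
// CLI flags such as --junit-report.
func TestFunctional(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Functional Tests")
}
```

CI would then request the report via the v2 reporter flag, e.g. `ginkgo --junit-report=junit.xml ./tests/...`, instead of relying on an in-code reporter list.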
I don't think it's that easy, as there's also the custom PolarionReporter to consider, which means following the migration guide's advice (a sketch follows the quote):

> If you've written your own custom reporter, [add a ReportAfterSuite node](https://onsi.github.io/ginkgo/MIGRATING_TO_V2#generating-custom-reports-when-a-test-suite-completes) and process the `types.Report` that it provides you. If you'd like to continue using your custom reporter you can simply call `reporters.ReportViaDeprecatedReporter(reporter, report)` in `ReportAfterSuite` - though we recommend actually changing your code's logic to use the `types.Report` object directly as `reporters.ReportViaDeprecatedReporter` will be removed in a future release of Ginkgo 2.X. Unlike 1.X custom reporters which are called concurrently by independent parallel processes when running in parallel, `ReportAfterSuite` is called exactly once per suite and is guaranteed to have aggregated information from all parallel processes.
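Concretely, keeping the existing PolarionReporter working would look roughly like this; `polarionReporter` and the suite text are placeholders for whatever the suite already defines, and the reporter is assumed to still satisfy the deprecated v1 interface:

```go
package tests_test

import (
	. "github.com/onsi/ginkgo/v2"
	"github.com/onsi/ginkgo/v2/reporters"
	"github.com/onsi/ginkgo/v2/types"
)

// polarionReporter stands in for the suite's existing custom PolarionReporter
// instance; it only needs to implement the v1-style reporter methods described
// by reporters.DeprecatedReporter.
var polarionReporter reporters.DeprecatedReporter

// Unlike v1 custom reporters, ReportAfterSuite runs exactly once per suite and
// receives the aggregated report from all parallel processes.
var _ = ReportAfterSuite("polarion report", func(report types.Report) {
	if polarionReporter == nil {
		return
	}
	// Bridge the v2 report back into the v1-style reporter. This helper is
	// itself slated for removal, so longer term the Polarion XML should be
	// built from types.Report directly.
	reporters.ReportViaDeprecatedReporter(polarionReporter, report)
})
```

The guide's preferred path is to drop the v1 interface entirely and have the Polarion writer consume `types.Report` directly.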
Issues go stale after 90d of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with `/close`.

/lifecycle stale

Stale issues rot after 30d of inactivity. Mark the issue as fresh with `/remove-lifecycle rotten`. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with `/close`.

/lifecycle rotten

/remove-lifecycle stale
/remove-lifecycle rotten
/kind bug
What happened:
The following output appears after applying https://github.com/kubevirt/ssp-operator/pull/505, but it has been around since we moved to v2 a while ago:
The code in question being:
https://github.com/kubevirt/ssp-operator/blob/9afb074a1c07980683839e8f31b60a87fbea05ee/tests/tests_suite_test.go#L567-L575
Hopefully we can just remove this, but I thought I'd write it up as a bug first.
What you expected to happen:
How to reproduce it:
Anything else we need to know?:
Environment:
- `virtctl version`:
- `kubectl version`:
- `uname -a`: