michael-schaller opened 3 years ago
> Perhaps we could instead give TestMain a way to get a list of the test functions, and have those functions report whether they succeed or fail. Then TestMain can do whatever it wants.
I don't see how the proposed changes to `TestMain` or `testing.M` solve the problem. In most cases, especially those where there is enough output to benefit from a summary, the tests for more than one Go package will be run. There is a `TestMain` for each package, so the summary would be printed when a package ends, not when the test run ends. The summary would end up in the middle of the output and be just as hard to find as the original test failure.

If you write the summary out to a file and gather it at the end, you once again need a separate binary, so why not use the existing mechanism (`-json` and a tool like gotestsum)?
The crux of this argument seems to be:

> developers and automation need to diverge from their usual workflow in order to utilize gotestsum

That statement seems a bit exaggerated to me.
First let's look at the developer experience. An external binary does add a one-time step of downloading it, that's true, but after that the workflow is essentially unchanged. Compare running `go test` to running `gotestsum`:

```
$ go test [go test flags] [packages] [test binary flags]
$ gotestsum -- [go test flags] [packages] [test binary flags]
```

There are optional `gotestsum` flags which can be specified as environment variables, but otherwise the commands are nearly identical. Any `go test` command line can be run using `gotestsum`.
Part of the argument seems to be that using another binary doesn't scale to larger teams. Any change to `TestMain` or `testing.M` would require every package in every project to import some library to print a summary. That seems like a much larger commitment than running `go get` once and being able to use the external binary for every package and every project.
Finally there is the question of automation. Many of the largest open source Go projects are already using gotestsum for CI: Kubernetes, Docker, many projects in the Docker and Kubernetes ecosystems, hashicorp/{terraform,vault,consul,nomad}, ipfs/go-ipfs, facebook/ent, influxdata/influxdb, prometheus/prometheus, etc. If so many of these large projects are already using an external binary, I would suggest it may not actually be a problem. I suspect these projects are using gotestsum not only for the summary, but also because:
The `go test` and `go test -v` output formats are not ideal for CI. `go test` does not print the names of passed or skipped tests, so if the environment is not set up correctly you could be unintentionally skipping tests that you thought you were running. `go test -v` will give you that information, but it essentially makes the log output useless for verbose test suites. The `testname` format in gotestsum is a more appropriate middle ground: you get a single line per passed test, making it easy to visually verify which tests ran and in which order, with full output for failed or skipped tests.

So even if this proposal were implemented, it seems unlikely to be a replacement for the solution that already exists.
I have another solution to this problem that is a little unconventional. If you really don't want to install a different binary, you could use a library to run `go test` and print the summary. I suspect that any change to `TestMain` would require roughly the same thing: you would need some library to aggregate and format the returned test cases, because presumably you have more than one package with tests.

As an example, you could create a file in the project at `test/main.go` with these contents:
```go
package main

import (
	"fmt"
	"os"

	"gotest.tools/gotestsum/cmd"
)

func main() {
	name, args := os.Args[0], os.Args[1:]
	args = append([]string{"-f=testname", "--"}, args...)
	if err := cmd.Run(name, args); err != nil {
		fmt.Fprintln(os.Stderr, "ERROR ", err.Error())
		os.Exit(1)
	}
}
```
To run tests with a summary:

```
$ go run ./test [go test flags]
```
Perhaps we could instead give TestMain a way to get a list of the test functions, and have those functions report whether they succeed or fail. Then TestMain can do whatever it wants.
If we also provided it a list of examples and benchmarks, it could run those as well, controlling parallelism and setting b.N = 1, subsuming #41824.
@dnephin Please don't get me wrong on gotestsum. I think gotestsum is very much useful in larger scenarios, as you rightfully mention in your first comment. So IMHO gotestsum isn't the issue; rather, it has to use `-json` to get the data from the test suite instead of the test suite providing that data in a more easily accessible form.
I partially agree with your second comment, but I wouldn't run `go test`. Once `TestMain` has access to that data, one could directly import the summary printer of gotestsum... ;-)
I would like to have access to the granular test results programmatically as well. I would like to collect per-test telemetry for every invocation. Since I can't always control the invocation to use `-json`, I would need to insert that telemetry collection into the code itself.

Looking at the testing package code, I think all the information I would need is readily available on the `common` struct. This could be exposed on `testing.M` as `func (m *M) Results() []TestResult`, where `TestResult` could be something like:
```go
type TestResult interface {
	// Test function name
	Name() string
	// Test, Benchmark, Fuzz
	Type() string
	// Enclosing package
	Package() string
	// Start time of the test execution
	Start() time.Time
	// Execution duration
	Duration() time.Duration
	// Passed, Failed, Skipped
	Result() string
	// Any output printed during the test
	Output() []string
	// Subtest results, empty slice if none
	SubTests() []TestResult
}
```
I think this would provide the same data as the original proposal, but allow for more extensibility. I think something like this is what @ianlancetaylor was suggesting as well.
Background

There have been several requests in the past for a summary at the end of a test suite run:

- #27755 - proposal: cmd/go: go test summary
- #30507 - cmd/go: add summary at end of go test results
- #41748 - proposal: testing: re-print failed test names before exiting

All these requests have been declined, as it isn't clear which summary style/format would clearly be beneficial.
The proposed workaround was to run the tests with `-json` and use that output to generate a suitable summary. For instance, gotestsum provides just that, but it is an external tool, so developers and automation need to diverge from their usual workflow in order to utilize gotestsum.

Proposal
Add test stats to `testing.M` so that `TestMain` can print a custom test results summary after `m.Run()`. The test stats should include:
This could be done by enhancing the `runTests`, `runExamples` and `runBenchmarks` functions to produce these lists. All lists would be of type `[]InternalTest`, `[]InternalExample` or `[]InternalBenchmark`.
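For reference, those plumbing types are already exported by the testing package (field sets as of recent Go releases), so the proposal would mostly be about surfacing result data alongside them. A small demonstration of the existing types:

```go
package main

import (
	"fmt"
	"testing"
)

func main() {
	// testing.InternalTest pairs a test name with its function;
	// runTests receives a []InternalTest built by `go test`.
	it := testing.InternalTest{Name: "TestExample", F: func(t *testing.T) {}}

	// testing.InternalBenchmark is the analogous type consumed by
	// runBenchmarks.
	ib := testing.InternalBenchmark{Name: "BenchmarkExample", F: func(b *testing.B) {}}

	// testing.InternalExample additionally records the expected
	// output; runExamples compares it against what the example prints.
	ie := testing.InternalExample{Name: "ExampleHello", F: func() {}, Output: "hello\n"}

	fmt.Println(it.Name, ib.Name, ie.Name)
}
```

Note these types describe the test functions to run, not their outcomes; the proposal would still need a separate structure (or extra fields) to carry pass/fail results back to `TestMain`.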