
cmd/cover: extend coverage testing to include applications #51430

Open thanm opened 2 years ago

thanm commented 2 years ago

Proposal: extend code coverage testing to include applications

Author(s): Than McIntosh

Last updated: 2022-03-02

Detailed design document: markdown, CL 388857

Abstract

This document contains a proposal for improving/revamping the system used in Go for code coverage testing.

Background

Current support for coverage testing

The Go toolchain currently includes support for collecting and reporting coverage data for Go unit tests; this facility is made available via the "go test -cover" and "go tool cover" commands.

The current workflow for collecting coverage data is baked into the "go test" command; the assumption is that the source code of interest is a Go package or set of packages with associated tests.

To request coverage data for a package test run, a user can invoke the test(s) via:

  go test -coverprofile=<filename> [package target(s)]

This command will build the specified packages with coverage instrumentation, execute the package tests, and write an output file to "filename" with the coverage results of the run.

The resulting output file can be viewed/examined using commands such as

  go tool cover -func=<covdatafile>
  go tool cover -html=<covdatafile>

Under the hood, the implementation works by source rewriting: when "go test" is building the specified set of package tests, it runs each package source file of interest through a source-to-source translation tool that produces an instrumented/augmented equivalent, with instrumentation that records which portions of the code execute as the test runs.

A function such as

  func ABC(x int) {
    if x < 0 {
      bar()
    }
  }

is rewritten to something like

  func ABC(x int) {GoCover_0_343662613637653164643337.Count[9] = 1;
    if x < 0 {GoCover_0_343662613637653164643337.Count[10] = 1;
      bar()
    }
  }

where "GoCover_0_343662613637653164643337" is a tool-generated structure with execution counters and source position information.

The "go test" command also emits boilerplate code into the generated "_testmain.go" to register each instrumented source file and unpack the coverage data structures into something that can be easily accessed at runtime. Finally, the modified "_testmain.go" has code to call runtime routines that emit the coverage output file when the test completes.

Strengths and weaknesses of what we currently provide

The current implementation is simple and easy to use, and provides a good user experience for the use case of collecting coverage data for package unit tests. Since "go test" is performing both the build and the invocation/execution of the test, it can provide a nice seamless "single command" user experience.

A key weakness of the current implementation is that it does not scale well-- it is difficult or impossible to gather coverage data for applications as opposed to collections of packages, and for testing scenarios involving multiple runs/executions.

For example, consider a medium-sized application such as the Go compiler ("gc"). While the various packages in the compiler source tree have unit tests, and one can use "go test" to obtain coverage data for those tests, the unit tests by themselves only exercise a small fraction of the code paths in the compiler that one would get from actually running the compiler binary itself on a large collection of Go source files.

For such applications, one would like to build a coverage-instrumented copy of the entire application ("gc"), then run that instrumented application over many inputs (say, all the Go source files compiled as part of a "make.bash" run for multiple GOARCH values), producing a collection of coverage data output files, and finally merge together the results to produce a report or provide a visualization.

Many folks in the Go community have run into this problem; there are large numbers of blog posts and other pages describing the issue and recommending workarounds (or providing add-on tools that help); doing a web search for "golang integration code coverage" will turn up many pages of links.

An additional weakness in the current Go toolchain offering relates to the way in which coverage data is presented to the user by the "go tool cover" command. The reports produced are "flat" and not hierarchical (e.g. a flat list of functions, or a flat list of source files within the instrumented packages). This way of structuring a report works well when the number of instrumented packages is small, but becomes less attractive if there are hundreds or thousands of source files being instrumented. For larger applications, it would make sense to create reports with a more hierarchical structure: first a summary by module, then by package within module, then by source file within package, and so on.

Finally, there are a number of long-standing problems that arise due to the use of source-to-source rewriting used by cmd/cover and the go command, including

#23883 "cmd/go: -coverpkg=all gives different coverage value when run on a package list vs ./..."

#23910 "cmd/go: -coverpkg packages imported by all tests, even ones that otherwise do not use it"

#27336 "cmd/go: test coverpkg panics when defining the same flag in multiple packages"

Most of these problems arise because of the introduction of additional imports in the _testmain.go shim created by the Go command when carrying out a coverage test run in combination with the "-coverpkg" option.

Proposed changes

Building for coverage

While the existing "go test" based coverage workflow will continue to be supported, the proposal is to add coverage as a new build mode for "go build". In the same way that users can build a race-detector instrumented executable using "go build -race", it will be possible to build a coverage-instrumented executable using "go build -cover".

To support this goal, the plan will be to migrate the support for coverage instrumentation into the compiler, moving away from the source-to-source translation approach.

Running instrumented applications

Applications are deployed and run in many different ways, ranging from very simple (direct invocation of a single executable) to very complex (e.g. gangs of cooperating processes involving multiple distinct executables). To allow for more complex execution/invocation scenarios, it doesn't make sense to try to serialize updates to a single coverage output data file during the run, since this would require introducing synchronization or some other mechanism to ensure mutually exclusive access.

For non-test applications built for coverage, users will instead select an output directory as opposed to a single file; each run of the instrumented executable will emit data files within that directory. Example:

$ go build -o myapp.exe -cover ...
$ mkdir /tmp/mycovdata
$ export GOCOVERDIR=/tmp/mycovdata
$ <run test suite, resulting in multiple invocations of myapp.exe>
$ go tool cover -html=/tmp/mycovdata
$

For coverage runs in the context of "go test", the default will continue to be emitting a single named output file when the test is run.

File names within the output directory will be chosen at runtime so as to minimize the possibility of collisions, e.g. possibly something to the effect of

  covdata.<metafilehash>.<processid>.<nanotimevalue>.out
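
A sketch of how such a collision-resistant name might be assembled at runtime (here "metaFileHash" is a stand-in for a hash of the binary's coverage meta-data):

  name := fmt.Sprintf("covdata.%x.%d.%d.out",
          metaFileHash,          // identifies the instrumented binary
          os.Getpid(),           // distinguishes concurrently running processes
          time.Now().UnixNano()) // distinguishes successive runs of one process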

When invoked for reporting, the coverage tool itself will test its input argument to see whether it is a file or a directory; in the latter case, it will read and process all of the files in the specified directory.
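
A minimal sketch of that dispatch (the processCoverDataFile helper is invented for illustration):

  fi, err := os.Stat(inputArg)
  if err != nil {
          log.Fatal(err)
  }
  if fi.IsDir() {
          entries, err := os.ReadDir(inputArg)
          if err != nil {
                  log.Fatal(err)
          }
          for _, e := range entries {
                  processCoverDataFile(filepath.Join(inputArg, e.Name()))
          }
  } else {
          processCoverDataFile(inputArg)
  }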

Programs that call os.Exit(), or never terminate

With the current coverage tooling, if a Go unit test invokes os.Exit() passing a non-zero exit status, the instrumented test binary will terminate immediately without writing an output data file. If a test invokes os.Exit() passing a zero exit status, this will result in a panic.

For unit tests, this is perfectly acceptable-- people writing tests generally have no incentive or need to call os.Exit; it simply would not add anything in terms of test functionality. Real applications routinely finish by calling os.Exit, however, including cases where a non-zero exit status is reported. Integration test suites nearly always include tests that ensure an application fails properly (e.g. returns with non-zero exit status) if the application encounters an invalid input. The Go project's all.bash test suite has many of these sorts of tests, including test cases that are expected to cause compiler or linker errors (and to ensure that the proper error paths in the tool are covered).

To support collecting coverage data from such programs, the Go runtime will need to be extended to detect os.Exit calls from instrumented programs and ensure (in some form) that coverage data is written out before the program terminates. This could be accomplished either by introducing new hooks into the os.Exit code, or possibly by opening and mmap'ing the coverage output file earlier in the run, then letting writes to counter variables go directly to an mmap'd region, which would eliminate the need to close the file on exit (credit to Austin for this idea).
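
A minimal sketch of the exit-hook variant (hypothetical names; the actual runtime mechanism is spelled out in the design doc):

  // Hypothetical runtime-internal hook list.
  var exitHooks []func()

  // addExitHook registers fn to be run when os.Exit is called.
  func addExitHook(fn func()) {
          exitHooks = append(exitHooks, fn)
  }

  // runExitHooks would be invoked from os.Exit just before the process
  // terminates, giving instrumented programs a chance to flush their
  // coverage counters to the output directory.
  func runExitHooks(code int) {
          for _, fn := range exitHooks {
                  fn()
          }
  }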

To handle server programs (which in many cases run forever and may not call exit), APIs will be provided for writing out a coverage profile under user control, e.g. something along the lines of

  import "<someOfficialPath>/cover"

  var *coverageoutdir flag.String(...)

  func server() {
    ...
    if *coverageoutdir != "" {
        f, err := cover.OpenCoverageOutputFile(...)
        if err != nil {
            log.Fatal("...")
       }
    }
    for {
      ...
      if <received signal to emit coverage data> {
        err := f.Emit()
        if err != nil {
            log.Fatalf("error %v emitting ...", err)
        }
      }
    }

In addition to OpenCoverageOutputFile() and Emit() as above, a variant of Emit() will be provided that accepts an io.Writer (to allow coverage profiles to be written to a network connection or pipe, in case writing to a file is not possible).
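
Hypothetical usage of that io.Writer variant, e.g. streaming a profile over a network connection (the EmitToWriter name is invented for illustration):

  conn, err := net.Dial("tcp", "cov-collector.example.com:9999")
  if err != nil {
          log.Fatal(err)
  }
  defer conn.Close()
  if err := cover.EmitToWriter(conn); err != nil { // hypothetical io.Writer-based Emit
          log.Fatalf("error %v emitting coverage profile", err)
  }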

Coverage and modules

Most modern Go programs make extensive use of dependent third-party packages; with the advent of Go modules, we now have systems in place to explicitly identify and track these dependencies.

When application writers add a third-party dependency, in most cases the authors will not be interested in having that dependency's code count towards the "percent of lines covered" metric for their application (there will definitely be exceptions to this rule, but it should hold in most cases).

It makes sense to leverage information from the Go module system when collecting code coverage data. Within the context of the module system, a given package feeding into the build of an application will have one of the three following dispositions (relative to the main module):

- the package is part of the main module itself
- the package is a dependency of the main module (e.g. a third-party package found via the main module's go.mod file)
- the package is part of the Go standard library

With this in mind, the proposal when building an application for coverage will be to instrument every package that feeds into the build, but record the disposition for each package (as above), then allow the user to select the proper granularity or treatment of dependencies when viewing or reporting.

As an example, consider the Delve debugger (a Go application). One entry in the Delve V1.8 go.mod file is:

    github.com/cosiner/argv v0.1.0

This package ("argv") has about 500 lines of Go code and a couple dozen Go functions; Delve uses only a single exported function. For a developer trying to generate a coverage report for Delve, it seems unlikely that they would want to include "argv" as part of the coverage statistics (percent lines/functions executed), given the secondary and very modest role that the dependency plays.

On the other hand, it's possible to imagine scenarios in which a specific dependency plays an integral or important role for a given application, meaning that a developer might want to include the package in the application's coverage statistics.

Merging coverage data output files

As part of this work, the proposal is to enhance the "go tool cover" command to provide a profile merging facility, so that a collection of coverage data files (emitted from multiple runs of an instrumented executable) can be merged into a single summary output file. Example usage:

  $ go tool cover -merge -coveragedir=/tmp/mycovdata -o finalprofile.out
  $

The merge tool will be capable of writing files in the existing (legacy) coverage output file format, if requested by the user.

In addition to a "merge" facility, it may also be interesting to support other operations such as intersect and subtract (more on this later).

Differential coverage

When fixing a bug in an application, it is common practice to add a new unit test in addition to the code change that comprises the actual fix. When using code coverage, users may want to learn how many of the changed lines in their code are actually covered when the new test runs.

Assuming we have a set of N coverage data output files (corresponding to those generated when running the existing set of tests for a package) and a new coverage data file generated from a new testpoint, it would be useful to provide a tool to "subtract" the coverage information in the first set from the second file. This would leave just the set of new lines / regions that the new test causes to be covered above and beyond what is already there.

This feature (profile subtraction) would make it much easier to write tooling that would provide feedback to developers on whether newly written unit tests are covering new code in the way that the developer intended.
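
Using the same flat representation as the merge sketch above, subtraction might look like:

  // subtractCounters removes from newProf any unit already covered by
  // the baseline, leaving only the newly covered code.
  func subtractCounters(newProf, baseline map[string]uint32) {
          for unit := range newProf {
                  if baseline[unit] > 0 {
                          delete(newProf, unit)
                  }
          }
  }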

Design details

Please see the design document for details on proposed changes to the compiler, etc.

Implementation timetable

The plan is for thanm@ to implement this in the Go 1.19 time frame.

Prerequisite Changes

N/A

Preliminary Results

No data available yet.

gopherbot commented 2 years ago

Change https://go.dev/cl/388857 mentions this issue: proposal: design document for redesigned code coverage

mvdan commented 2 years ago

https://github.com/golang/go/issues/30306 is also likely relevant :)

qmuntal commented 2 years ago

In the past I've had the need to merge coverage profiles from the same application executed on different OSes, and therefore different binaries, in order to report a single coverage metric and have an OS-agnostic report.

I see no explicit reference to this use case in the design, would it be covered? (pun intended)

thanm commented 2 years ago

Merging coverage profiles produced in different GOOS/GOARCH environments: yes, this will absolutely be supported.

One of the interesting (and IMHO slightly weird) aspects of the current coverage system is that its definition of "all source code in the package" is limited by what's picked up by the build tags in effect for the "go test" run.

So it's possible to do "go test -cover" and see "100%" statement coverage on linux, even if there may be a 500-line function in a file foo_windows.go in the package (with build tags to select only for GOOS=windows) that is effectively invisible to the coverage tooling.
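
For example, given a package file like the following (names invented), a "go test -cover" run on linux never even sees the windows-only code:

  //go:build windows

  package foo

  // watchRegistry is compiled only for GOOS=windows; on linux it is
  // invisible to the coverage tooling rather than being reported as
  // 0% covered.
  func watchRegistry() {
          ...
  }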

stapelberg commented 2 years ago

This sounds great! Can’t wait to try this out! :)

bcmills commented 2 years ago

(#31007 is also somewhat related.)

ChrisHines commented 2 years ago

I really like the direction this proposal would take code coverage for Go. It looks like this proposal would lay the foundation to bring Go's code coverage story to the next level. From a developer's perspective I really liked the capabilities of the EMMA tool for Java code coverage I used to use years ago. I felt I had lost something with the code coverage tools available to Go when it became my main programming language roughly ten years ago. Consider taking a look at it as prior art if you haven't already.

In particular I would like to advocate for the "Intra-line coverage" feature mentioned in the "Possible extensions" section of the proposal. That was a feature of EMMA that I got a lot of value out of when I used it (EMMA calls it fractional line coverage).

But also, the ability to measure code coverage for a whole application and gather coverage data from integration tests are two use cases my current project at work needs a solution for. This was a discussion that came up just last week, so the timeliness of this proposal is amazing.

adonovan commented 2 years ago

Nice idea. This would bring the designs for coverage used by the go build tool and Blaze/Bazel into convergence. (The latter already implements coverage as a build mode, and supports whole-program coverage.) Moving the instrumentation step from a source-based transformation to a compiler pass should also simplify both go and Bazel, and may permit finer-grained coverage of short-circuit expressions such as a || b && c than line granularity.
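
A small example of the granularity issue: with line-level counters, the condition below counts as a single covered line even when parts of it never execute.

  // If a is always true, b and c are never evaluated, yet line-based
  // coverage reports this line as fully covered.
  if a || b && c {
          doSomething()
  }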

rsc commented 2 years ago

This proposal has been added to the active column of the proposals project and will now be reviewed at the weekly proposal review meetings. — rsc for the proposal review group

ianlancetaylor commented 2 years ago

CC @robpike

robpike commented 2 years ago

While I understand the weaknesses of the current approach as well as anyone, moving the heavy lifting into the compiler seems like a mistake to me. There are advantages to the extant decoupling, and the ease of maintenance of the tool and tweaks to the build chain.

Wouldn't it be simpler and more flexible to keep the source-to-source translator and just work on go build? The problems of compilation and coverage can be kept separate, with all the advantages of simplicity and portability that result.

thanm commented 2 years ago

Wouldn't it be simpler and more flexible to keep the source-to-source translator and just work on go build?

This is a fair question, and something I've been thinking about a good bit lately.

My initial instinct was to use an entirely compiler-based approach because it seemed to me to offer the most control/freedom in terms of the implementation, but also (to be honest) because the compiler + linker are where I have the most expertise, e.g. my "comfort zone".

I've written CLs to do things in the compiler, and this approach seems to work well overall, but there are definitely some headaches.

One headache is that if we move things into the main GC compiler, we also need something that will work with the other compilers (gccgo, gollvm).

A second problem is that the compiler at the moment doesn't capture all of the source position information that you need. For example, consider this function:

func addStr(x, y string) string { // line 19
    return x + y                  // line 20
}                                 // line 21

Here's the AST dump from the compiler before the 'walk' phase:

before walk addStr
.   RETURN tc(1) # p.go:20:2
.   RETURN-Results
.   .   ADDSTR esc(h) string tc(1) # p.go:20:11
.   .   ADDSTR-List
.   .   .   NAME-p.x esc(no) Class:PPARAM Offset:0 string tc(1) # p.go:19:13
.   .   .   NAME-p.y esc(no) Class:PPARAM Offset:0 string tc(1) # p.go:19:16

The compiler doesn't capture any source position info for the open and close braces on lines 19 and 21 (why bother, we don't need them for debugging), and if you look at the references to "x" and "y" on line 20, their source positions correspond to their definitions, not their uses.

This doesn't create any issues when reporting basic coverage metrics (e.g. percent of statements covered), but it's a real problem when generating HTML reports, since you want to be able to "paint" chunks of the code red or green (depending on whether they executed or not)-- you can't paint a chunk correctly if you didn't capture any source position info for it.

Of course, I could change the compiler to work in a mode where it captures/records more of this info (if building with "-cover"), but the risk there is that it might slow down the "fast path" for compilation even when the extra info collection is turned off.

Given that I've already written a compiler-based implementation, I think what I am going to do now is try prototyping a source-to-source based alternative as well (but moving to emitting the new output format). That might be a better middle ground.

ChrisHines commented 2 years ago

Is there any synergy between this proposal and the new go test -fuzz feature? Could fuzz testing reuse the coverage tooling from this proposal to any benefit?

thanm commented 2 years ago

Is there any synergy between this proposal and the new go test -fuzz feature? Could fuzz testing reuse the coverage tooling from this proposal to any benefit?

This is a reasonable question (I might add that this also came up previously during an internal design review).

Although coverage testing and fuzzing both incorporate some notion of "coverage data", the two things are sufficiently different in terms of how they use the data that it probably isn't worth trying to share implementations.

In both cases the compiler (or tool) is adding instrumentation code (counter increment or modification) on control flow edges, but beyond that things diverge in a big way.

For fuzzing there is no need to capture source position information for counters at all (there would be no point), and the values of the coverage counters are used only internally / on-line within the test, never written out or stored. The only thing the fuzzer wants to know is whether coverage "changed" as a result of a mutation (AIUI).

For coverage testing on the other hand, source position info is critical (it's arguably the key piece), and unlike fuzzing we don't just want to determine that coverage has changed, we need to be able to write it out and report on it.

So with that in mind, my guess is that there isn't going to be a lot of useful overlap. Which I think is probably OK -- the existing fuzzer support for coverage instrumentation is pretty simple, e.g.

https://go.googlesource.com/go/+/f1dce319ffd9d3663f522141abfb9c1ec9d92e04/src/cmd/compile/internal/walk/order.go#444

komuw commented 2 years ago

problem: which particular testcase/s covers the function that I'm about to modify?

The above is a situation I always find myself in and something I had hoped go tool cover would help with.
I don't know if this proposal will make it possible to answer the question (what testcase/s cover func Bar?), but I'm sharing my usecase just in case.

(edit) my current workaround is usually;

func Bar() {
+   debug.PrintStack()
}

and then I run the whole test suite with -v and figure out the testcases that cover Bar based on the stacktrace.

thanm commented 2 years ago

which particular testcase/s covers the function that I'm about to modify?

This is covered in the detailed design document in this section:

https://go.googlesource.com/proposal/+/master/design/51430-revamp-code-coverage.md#tagging-coverage-profiles-to-support-test-origin-queries

I agree that having tools to answer these sorts of queries would be really valuable.

When we were rewriting the Go linker during the 1.15/1.16 time frame it seemed as though we were running into this situation on a daily basis (the linker has many "dark corners" that are only executed with obscure inputs and build modes, leading to many questions of the form "What test do I need to run in order to trigger the execution of this function?").

rsc commented 2 years ago

The discussion seems to have trailed off. Is there anything left to discuss? Does anyone object to this plan?

adonovan commented 2 years ago

Rob's point about compiler complexity/portability is valid, and Than's "middle ground"-- retaining source-to-source translation but standardizing the build and runtime interfaces of coverage-- sounds like a good compromise.

rsc commented 2 years ago

I am not sure we want to end up in a world where there are two different coverage mechanisms we have to maintain, so I am not sure about the middle ground of keeping both - what would use the old source-to-source translator, and why would we maintain it?

LLVM and gccgo do not support cmd/cover right now; they just use the coverage built in to those compilers. (It might be nice to have some kind of adapter to generate the files that cmd/cover builds nice HTML displays from though.)

Is there anything more we need to know from the implementation side in order to decide here? That is, do we want to wait for any more CLs?

adonovan commented 2 years ago

My understanding of Than's "middle ground" approach is that he intended to proceed with the changes to the run-time interface (the Go packages used within the target process, and the contract between those packages and the build tool regarding where files are written), but to back away from compiler-based instrumentation and keep the existing source-to-source translation algorithm.

rsc commented 2 years ago

@thanm, what do you think the status of this proposal is? Are there any details in flux that we still need to work out? And can you confirm @adonovan's comment that the plan is to stick with source-to-source and not do compiler instrumentation?

thanm commented 2 years ago

what do you think the status of this proposal is? Are there any details in flux that we still need to work out?

hi, thanks for the "ping".

In terms of the design, I think things are mostly settled overall. There are a couple of smallish items relating to the command line API that need hashing out; I am in the process of adding more details in these areas to the design doc, and when I am done I will post an update here on the issue.

In terms of the implementation, I have "go build -cover" working and all.bash passing with the new stuff turned on (new design does everything that the old one did). I have not actually landed any of my CL stack yet however.

The test "origin" feature and the intra-line coverage featureare not yet implemented; I think at this point (given that the release freeze in in four weeks) I'll need to postpone features until after Go 1.19.

can you confirm @adonovan's comment that the plan is to stick with source-to-source and not do compiler instrumentation?

Confirmed: there will still be source-to-source rewriting, followed by a small amount of additional "special sauce" applied by the compiler when building the rewritten source. Doing things this way (IMO) provides the best overall solution.

Thanks.

robpike commented 2 years ago

@thanm Please make sure that the source-to-source translator can still be used by other compiler suites.

thanm commented 2 years ago

@thanm Please make sure that the source-to-source translator can still be used by other compiler suites.

Yes, that is definitely "in plan".

rsc commented 2 years ago

Based on the discussion above, this proposal seems like a likely accept. — rsc for the proposal review group

gopherbot commented 2 years ago

Change https://go.dev/cl/404414 mentions this issue: proposal: updates to code coverage revamp design document

thanm commented 2 years ago

FYI: progress on my implementation CLs is moving along, but at this point in the game (less than 1 week to go before release freeze) it seems unlikely that this feature will be included in 1.19-- wanted to share that with folks who are following along on this issue. Slipping to Go 1.20 is probably a good thing in the long run; will allow for more thorough review of the trickier parts of the implementation.

I've also posted an update to the detailed design to bring it into alignment with my current implementation (this is primarily to reflect the fact that the new scheme will use a mix of source-to-source rewriting and compiler support).

I've also tagged my implementation CLs with this issue (now that they have stabilized for the most part).

gopherbot commented 2 years ago

Change https://go.dev/cl/357609 mentions this issue: cmd: support reading coverage counter data files

gopherbot commented 2 years ago

Change https://go.dev/cl/395895 mentions this issue: cmd/compile: add coverage fixup mode

gopherbot commented 2 years ago

Change https://go.dev/cl/359403 mentions this issue: internal/coverage: add apis for reading/writing counter data

gopherbot commented 2 years ago

Change https://go.dev/cl/354790 mentions this issue: runtime: add an exit hook facility

gopherbot commented 2 years ago

Change https://go.dev/cl/401235 mentions this issue: cmd/compile,cmd/link: hooks for identifying coverage counters

gopherbot commented 2 years ago

Change https://go.dev/cl/401236 mentions this issue: runtime/coverage: apis to emit counter data under user control

gopherbot commented 2 years ago

Change https://go.dev/cl/395896 mentions this issue: cmd/cover: add hybrid instrumentation mode

gopherbot commented 2 years ago

Change https://go.dev/cl/402174 mentions this issue: cmd/go: add hook to check for GOEXPERIMENT in script tests

gopherbot commented 2 years ago

Change https://go.dev/cl/401234 mentions this issue: runtime: add hook to register coverage-instrumented packages

gopherbot commented 2 years ago

Change https://go.dev/cl/395898 mentions this issue: internal/buildcfg: turn on GOEXPERIMENT=coverageredesign by default

gopherbot commented 2 years ago

Change https://go.dev/cl/355451 mentions this issue: runtime/coverage: runtime routines to emit coverage data

gopherbot commented 2 years ago

Change https://go.dev/cl/353453 mentions this issue: internal/coverage: add coverage meta-data encoder

gopherbot commented 2 years ago

Change https://go.dev/cl/395894 mentions this issue: cmd: add a new goexperiment for redesigned code coverage

gopherbot commented 2 years ago

Change https://go.dev/cl/353454 mentions this issue: internal/coverage: add coverage meta-data decoder

gopherbot commented 2 years ago

Change https://go.dev/cl/355452 mentions this issue: cmd/go: support new hybrid coverage instrumentation

gopherbot commented 2 years ago

Change https://go.dev/cl/404299 mentions this issue: cmd/cover,cmd/go: better coverage support for tests that build tools

rsc commented 2 years ago

No change in consensus, so accepted. 🎉 This issue now tracks the work of implementing the proposal. — rsc for the proposal review group

gopherbot commented 2 years ago

Change https://go.dev/cl/409994 mentions this issue: proposal: more updates to code coverage revamp design document

gopherbot commented 1 year ago

Change https://go.dev/cl/432757 mentions this issue: cmd: relocate search.MatchPattern to cmd/internal/pkgpattern

thediveo commented 1 year ago

Would this new design/revamp also support coverage of programs that re-execute themselves? On Linux, applications and packages that deal with Linux namespaces to some extent do re-executions in order to switch into namespaces, because the process must be single-threaded at the time of switching. Docker does re-execution, podman too from what I've seen, and other container engines as well...

In the past, I've developed a workaround https://github.com/thediveo/gons#gonsreexectesting that wraps a testing.M in order to write multiple cov files across re-executions and then aggregate them. I would be a happy gopher to be able to retire this ugly hack!

thanm commented 1 year ago

@thediveo yes, you should be able to do away with these sorts of wrappers entirely.

The new usage model will be

  $ go build -cover -o myapp.exe ...
  $ mkdir /path/to/coverageprofiledir
  $ GOCOVERDIR=/path/to/coverageprofiledir ./myapp.exe ...
  <fork, re-exec to whatever degree wanted>
  $ go tool covdata textfmt -i=/path/to/coverageprofiledir -o output.cov.txt
  $

The GOCOVERDIR=... "run" step above can be a simple execution, or a script that does repeated runs of the application, or whatever other scenario you need as part of your integration test.

gopherbot commented 1 year ago

Change https://go.dev/cl/435335 mentions this issue: cmd/link: fix coverage counter issue on AIX

gopherbot commented 1 year ago

Change https://go.dev/cl/436675 mentions this issue: cmd/{cover,go}: avoid use of os.PathListSeparator in cmd/cover flag