dsymonds opened 6 years ago
If a test opens a file that is not under GOPATH or GOROOT, then its result will not be cached. Does that seem sufficient?
As in, I could open /dev/null (OS-specific, alas) to signal that? Is that documented anywhere?
No, wait, sorry, I got it wrong. If a test opens a file that is under GOPATH or GOROOT, then if that file changes, the cache is ignored. If a test opens a file that is not under GOPATH or GOROOT, that does not affect caching.
Similar problem: testing an executable out of process, building it with github.com/onsi/gomega/gexec and running it against golden files. The test result is cached even if the executable's sources have changed.
https://tip.golang.org/doc/go1.11#gocache says this:
Go 1.11 will be the last release to support setting the environment variable GOCACHE=off to disable the build cache, introduced in Go 1.10. Starting in Go 1.12, the build cache will be required, as a step toward eliminating $GOPATH/pkg. The module and package loading support described above already require that the build cache be enabled. If you have disabled the build cache to avoid problems you encountered, please file an issue to let us know about them.
This is one such issue. It'd be nice if there were something formally supported in Go 1.11, in advance of the major workaround (GOCACHE=off) being removed in Go 1.12.
I included a few suggested approaches in my original message. Who's the decider for this?
I'm not convinced this is a property of test cases. The test is rerun if the binary has changed. If the binary has not changed, then I think it is reasonable to say "this binary passed the last time" and leave it out, especially if you've changed one line in one source file and are running go test all or go test ./... to just test everything that line might affect. In that situation, you definitely don't want to run a binary that is completely unchanged just in case the external dependency has changed itself. If we let test cases unconditionally opt out of caching, then that use case is completely gone.
I realize that in some situations you do really want the test to run even if there was a successful run with exactly the same binary in the past. When that's the case, be explicit about it and say -count=1.
Leaving open to look at again in Go 1.12 cycle but very likely the answer is no.
I don't understand your response. This is only for a small minority of test cases, which would have to opt in to this. If a test case has said "don't cache my result", then I do want to run that test every time, because that's exactly what the test case has indicated. Now if that's a problem, it's a problem because of the test having external dependencies (or whatever), not because it has opted out of test caching. The use case of go test ./... is not at all gone; it makes no sense to say that. It's still there. It just runs an extra test that has opted in to always being run. And that test has presumably done that knowing full well that it may be run even though the test binary doesn't change.
Using -count=1 is a hack. It's user-specific rather than test-specific, and it breaks the use case that you say you want to preserve: namely, it'll bust test caching for go test ./... for all tests, not just the one-in-a-hundred (or whatever) test that knows it needs to be uncached.
I see this as analogous to t.Parallel, which permits a test to signal that there's something special about it. It wouldn't be used very often, but it carries important information about the test.
I think for people working on dependencies down in the leaves it is critical that running go test all twice does nothing the second time. If you are the author of a supposedly-uncacheable test, then yes, you want it to run anew every time you run it. But other people run your tests too, and my point is that it is not appropriate to force your preference into those use cases.
It would be OK to have a 'mark this as depending on external things' as long as that result was still cached by default, and then we could also add a 'rerun all the tests that depend on external things' as an explicit flag. Not sure what that would look like exactly.
Leaving for Go 1.13.
we could also add a 'rerun all the tests that depend on external things' as an explicit flag
Or cmd/go could automatically re-run such tests in the current module, but not in dependencies.
Or cmd/go could automatically re-run such tests in the current module, but not in dependencies.
Or, more uniformly, it could re-run tests that matched patterns (foo/...) but not meta-packages (all). But one of the explicit examples that @rsc gave was ./..., which is in the current module.
As a workaround, I suppose you could always get the intended effect by using a //go:generate directive to change a random number in a file:

```go
package external

//go:generate bash -c 'echo -e "package external\nconst S = \"$(head -c 8 /dev/random | base64)\"" > random.go'
```

```go
package foo_test

import _ "example.com/external"
```

```
go generate external && go test ./...
```
I don't know if this comment here should be considered a distinct issue, or if a solution for "identify tests as uncacheable" would also be the solution for this. In short, it seems go test all is currently not nearly as useful in practice as it could be.

As your total count of direct and indirect dependencies grows, the chances climb relatively high that at least one package in your chain will have a test that is "expected" to fail unless you follow the steps in some README or set up some external system. go test -short all has a higher chance of succeeding than go test all, but -short can skip many useful tests.
Ideally, there would be some way to mark a test as requiring additional setup or as "not expected to succeed as part of go test all", or something like that. Perhaps "mark test as uncacheable" would be the answer.
After raising this as a concern, @bcmills pointed me to this issue.
edit: #31310 and #30595 are different from this issue, but they might at least be partially related depending on the approach.
In https://github.com/golang/go/issues/30595#issuecomment-472011263, Ian wrote:
In my opinion, tests that require special handling should define their own flag and should only be run if that flag is passed. If the flag is not passed, they should call t.Skip.
The context there was discussion about separating integration and unit tests.
Perhaps that flag-based approach could be part of the solution here for tests that rely on external dependencies or are otherwise uncacheable, as well as to help with the concern expressed in https://github.com/golang/go/issues/23799#issuecomment-469297284 about go test all not being nearly as useful in practice as it could be, especially when you have a large number of indirect dependencies.
If that is going to be part of the path forward, it would help to get some type of convention established and promoted. It could be tied to the desire to increase the usefulness of go test all, given that the hope is people will start more frequently running the tests of their indirect dependencies based on how modules have redefined go test all to make it more useful.
The further you get away from your direct dependencies, the less likely you are to know about some quirk of how to avoid running tests that rely on external resources or manual setup. It would be nice to make it easier to avoid those, without the solution being "read 50 READMEs".
If the binary has not changed, then I think it is reasonable to say "this binary passed the last time" and leave it out
This may be true in the majority of cases, but if the binary’s purpose is to integrate with implicitly-linked dependencies, then previous test results just boil down to “the code still compiles and worked at some point”. By skipping the tests, go's test suite is making the assumption, on behalf of the developer running the tests, that the external dependencies didn’t change.
The go toolset already has strong mechanisms that can be used to prevent ‘integration’ tests from running as part of the default suite of tests (go test ./…); for instance, build tags:

```go
// +build integration

package myservice_test

func Test123(t *testing.T) {
	// ...
}
```
Using build tags, we can work around this issue by running the tests twice: once without the tag and once with it.
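The exact invocations weren't preserved above, but a plausible pair (commands are illustrative, not quoted from the original comment) is to run the normal cacheable suite first, then the tag-gated integration tests with caching disabled:

```shell
# Regular (cacheable) tests; files tagged "integration" are excluded.
go test ./...

# Integration tests: include the build tag and bypass the result cache.
go test -tags=integration -count=1 ./...
```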
This method works but requires extra developer communication on how to properly run a project’s test suite.
Whereas, if the test developer could disable the cache for specific tests:
```go
// +build integration

package myservice_test

func Test123(t *testing.T) {
	t.NoCache()
	// ...
}
```
then the go test suite would never make the assumption that implicitly-linked dependencies went unchanged. Running all tests by default would still work as intended; and developers could opt into the integration tests via the build tag without having to use -count=1 (risking clearing the cache on other tests) or having to use -run=... to purposely clear select test caches.
@dudleycodes, note that the cache key for a test result includes the values of all environment variables read by the test. If the test enables the “integration” tests based on an environment variable, then no special tag is needed (and the compile step will be easier to cache).
@bcmills interesting solution - but it still seems like a workaround as it’d require communicating to downstream developers to set the env var and mutate its contents in between tests. Whereas if we could explicitly selectively disable the cache for specific tests this intent/knowledge would be expressed in the test code with no further human intervention needed.
There's still definitely a need for this; any additional thoughts?
@stevenh What is your use case? Thanks.
In our case we have a test which builds and runs a docker container with components from the repo and then runs tests against it.
We hit a case today where local tests were all passing because the result of the test was cached, as go didn't realise it needed to be re-run. It happened to get caught by CI, but that was more luck due to the different cache contents.
Being able to either disable test caching for this test or add dependencies manually would allow this case to be handled.
Does that help?
Thanks.
The new test caching stuff is neat, except when a test has an external dependency (e.g. it is testing code that hits a web service), and we don't want the test's result to be cached (so that we're always exercising the code against the real world).
There are ways to disable the test caching from the user's perspective (e.g. passing -count=1), but not from the test itself, from what I can tell. It'd be nice if tests in this position could do something to indicate to the go tool that its result and output should not be cached. Some ideas:

- A method on *testing.T that can be invoked to signal this.
- Something in the environment or cache directory (e.g. $GOCACHE/something).
- A naming convention for such tests (e.g. a name containing an "External" substring).