NOTE: Posting this as a preview for now, while we check whether tests are running somewhere else that wouldn't be covered by this change.
When running integration tests, we walk all modules under the `service` module. The actual command executed is:
```sh
cd ./internal/repotools/cmd/eachmodule \
&& go run . -p service -c 1 -fail-fast=true -skip="" \
"go test -timeout=10m -tags "integration" -v -run '^TestInteg_' -count 1 ./..."
```
Running this results in a lot of entries like this:

```
ok github.com/aws/aws-sdk-go-v2/service/accessanalyzer 0.418s [no tests to run]
testing: warning: no tests to run
PASS
```
Unfortunately, every "miss" (a module without any integration tests) still spends about half a second just to report that there are no tests to run. This has the surprising effect that we spend more time walking no-op modules than actually executing tests. For example, on a local run of `time make integration`:
```
make integration 154.27s user 112.93s system 121% cpu 3:39.90 total
```
Out of that 3:39.90 wall-clock total, the `no tests to run` entries sum to 150.5 seconds (2:30), roughly 68% of the run, during which we were basically staring at the wall.
After discussing offline and looking at the build output, we decided to skip modules that don't contain any integration tests instead of invoking `go test` in every one.
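As a hedged sketch of one way such a skip could work, assuming the runner checks each module for `_test.go` files carrying the `integration` build tag before spending a `go test` invocation on it (the actual mechanism in this change may differ):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// hasIntegrationTests reports whether any _test.go file directly in dir
// carries the integration build tag.
func hasIntegrationTests(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() || !strings.HasSuffix(e.Name(), "_test.go") {
			continue
		}
		src, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			return false, err
		}
		// Accept both the legacy (// +build integration) and the
		// current (//go:build integration) tag syntax.
		if bytes.Contains(src, []byte("build integration")) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Print only the module directories (given as arguments) that
	// actually contain integration tests; a caller can then run
	// `go test` in just those, skipping the no-op modules entirely.
	for _, dir := range os.Args[1:] {
		ok, err := hasIntegrationTests(dir)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ok {
			fmt.Println(dir)
		}
	}
}
```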
Running the tests after this change:

```
make integration 24.43s user 6.61s system 136% cpu 22.741 total
```

That is roughly a 10x reduction in wall-clock time (3:39.90 down to 22.7s).