Closed KristofferC closed 2 years ago
Thank you for pointing this out, I'll investigate. I haven't been happy with these tests since writing them, but I have never come anywhere close to those limits either. The idea is that this package should never generate code that is worse than existing implementations for special cases. I'd be happy to test the generated code directly, but that seems more involved. Do you advise against performance tests in general for a package test suite?
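One machine-independent alternative (a sketch, not something from this thread; the function `f` below is a hypothetical stand-in for a StaticKernels.jl operation) is to assert on allocation counts rather than wall-clock time, since `@allocated` does not depend on machine speed:

```julia
using Test

# Hypothetical kernel operation standing in for a StaticKernels.jl call.
f(a) = sum(a)

@testset "no allocations in hot path" begin
    a = rand(100)
    f(a)  # warm up: compile before measuring
    # Allocation counts are stable across machines for a given Julia
    # version, unlike timings, so this is safe to run on slow CI workers.
    @test @allocated(f(a)) == 0
end
```

This catches the most common regression (accidental heap allocation in a kernel loop) without any arbitrary timing threshold.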
> Do you advise against performance tests in general for a package test suite?
I think it would be better to use something like https://github.com/tkf/BenchmarkCI.jl. But you could also choose to run the benchmark tests only when they are explicitly enabled (and you know they will run on a reasonable machine). For example, you could look at an environment variable and only run them when it is set. That would mean they would not run on e.g. PkgEval (which can be very slow, since it tests a bunch of packages in parallel).
Thanks, for now I've disabled those tests when running in PkgEval.
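A guard along these lines could look like the following sketch (PkgEval sets the `JULIA_PKGEVAL` environment variable; the `STATICKERNELS_BENCHMARKS` opt-in flag and the `benchmarks.jl` file name are made-up illustrations, not the package's actual setup):

```julia
# In test/runtests.jl: skip timing-sensitive benchmarks on PkgEval,
# and otherwise run them only when explicitly requested.
is_pkgeval = get(ENV, "JULIA_PKGEVAL", "false") == "true"
run_benchmarks = get(ENV, "STATICKERNELS_BENCHMARKS", "false") == "true"  # hypothetical opt-in

if run_benchmarks && !is_pkgeval
    include("benchmarks.jl")  # the machine-dependent timing tests
end
```

With this, `STATICKERNELS_BENCHMARKS=true julia --project -e 'using Pkg; Pkg.test()'` runs the full suite locally, while CI and PkgEval skip the timing assertions by default.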
Benchmarks like https://github.com/stev47/StaticKernels.jl/blob/bf238cab33d456dd18563baa415327eb6e27f06a/test/runtests.jl#L213 should in general not be in a test suite, since they will fail when run on a slower machine than the one the arbitrary limit was set on. This causes e.g. noise in the PkgEval suite, where packages are tested on new Julia versions: https://s3.amazonaws.com/julialang-reports/nanosoldier/pkgeval/by_hash/f51f24c_vs_2ca8b0c/StaticKernels.primary.log