ernado opened this issue 4 years ago
Hi, it looks like we can run golangci-lint on its own repository with all linters enabled, parse the time and peak memory from the logs, and then check that these values are within the specified range for our CI machines.
And do the same nightly with some large repo like k8s.
So, we can split the benches into two categories:

1. Micro-benchmarks integrated into the go tooling (`go test -bench`).
2. End-to-end runs on a real repo, measuring total duration and peak memory.
I think both should be run twice (once with and once without the PR changes) to avoid noise. (2) is more obvious to interpret, e.g. "Linting duration increased by 10s" or "Peak memory consumption decreased by 50 MB", but can be heavy and noisy.
Not sure about (1), but those benchmarks are integrated into the go tooling and have a much faster feedback cycle.
Also, we could automate (2) on a big open-source repo with something like `make bench`, which would build HEAD, compare it to the latest release, and display the formatted changes.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
We should check that an MR does not significantly regress linter performance, e.g.:
Currently I'm using a fully code-generated kubernetes repo for manual regression tests, something like this:
I want to automate this in some way. Probably with some benchmarks?