Closed · jpsim closed this 7 years ago
I really like this idea! 💯
I only wish CI was faster to handle this as well as it deserves 😬
We could even make the report with checkboxes so we can mark if the violation is a valid one or not ✅
Could have a "Regressions" section too, which would obviously be more important to address than say a valid violation of an unreleased rule.
We could run the projects against a stable SwiftLint version and the current one. Then we can flag the violations that only happen in the new version (and are from a rule that is also in the stable version) as potential regressions. I say potential because we could have been improving an existing rule to flag more cases (such as `comma_rule`).
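A minimal sketch of that comparison, assuming the two runs produce output in the shape of SwiftLint's JSON reporter (an array of violation objects with `file`, `rule_id`, and `reason` keys); the function name and example data here are hypothetical:

```python
def potential_regressions(stable_report, new_report, stable_rule_ids):
    """Violations present only in the new run that come from rules
    that also exist in the stable version (new rules don't count)."""
    def key(v):
        # Identify a violation by file, rule, and reason; line numbers
        # shift too easily between versions to be part of the key.
        return (v["file"], v["rule_id"], v["reason"])

    stable_keys = {key(v) for v in stable_report}
    return [
        v for v in new_report
        if key(v) not in stable_keys and v["rule_id"] in stable_rule_ids
    ]

# Hypothetical data in the shape of SwiftLint's JSON reporter output
stable = [{"file": "A.swift", "rule_id": "comma", "reason": "x"}]
new = [
    {"file": "A.swift", "rule_id": "comma", "reason": "x"},    # pre-existing
    {"file": "B.swift", "rule_id": "comma", "reason": "y"},    # potential regression
    {"file": "C.swift", "rule_id": "new_rule", "reason": "z"}, # unreleased rule: skip
]
print(potential_regressions(stable, new, stable_rule_ids={"comma"}))
# [{'file': 'B.swift', 'rule_id': 'comma', 'reason': 'y'}]
```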
> We could run the projects against a stable SwiftLint version and the current one.
If I recall correctly, that's the approach that @scottrhoyt had started to take when working on performance tests (see https://github.com/realm/SwiftLint/issues/376#issuecomment-268702693).
Architecturally, I think that's slightly the wrong approach, and that it'd be preferable to compare `master` against the PR. On the other hand, it's that kind of "search for perfection" that stalled the work on automated performance testing last time around, so I'm a lot more open to tackling this problem in more incremental steps.
But generally, we should try to reuse the same approach for building out both performance tests and OSS checks, since I think they can share some infrastructure in common.
My point in using the stable version is that we can mark as regressions just the unreleased rules. `master` could have other unreleased rules, and that could cause too many violations to manually check whether they're legitimate.
> My point in using the stable version is that we can mark as regressions just the unreleased rules. `master` could have other unreleased rules, and that could cause too many violations to manually check whether they're legitimate.
This only really works if the project being linted has zero SwiftLint violations, which won't always be the case.
I'd rather filter out pre-existing violations, so that PRs are really tested in isolation.
Otherwise, imagine if we knowingly incur a violation in a PR. That will show up in every subsequent PR, even if the change is unrelated.
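That filtering could be a plain set difference between the baseline (master) report and the PR report. A minimal sketch, again assuming the violation shape of SwiftLint's JSON reporter; the function name and paths are made up for illustration:

```python
def new_violations(baseline, current):
    """Return only the violations the PR introduces, ignoring any that
    already exist on the baseline (e.g. a knowingly incurred one)."""
    seen = {(v["file"], v["rule_id"], v["reason"]) for v in baseline}
    return [v for v in current
            if (v["file"], v["rule_id"], v["reason"]) not in seen]

# A knowingly incurred violation on master no longer shows up in later PRs:
master = [{"file": "Sources/Lint.swift", "rule_id": "todo", "reason": "TODOs"}]
pr = master + [{"file": "Sources/Rule.swift", "rule_id": "line_length", "reason": "120"}]
print(new_violations(master, pr))
# [{'file': 'Sources/Rule.swift', 'rule_id': 'line_length', 'reason': '120'}]
```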
Fair enough 👍
I have an idea on how to build this efficiently-ish!
I can set up an S3 bucket that Travis can write to. When Travis builds on `master`, it pushes some linter reports in a versioned folder:
```
swiftlint_reports.realm.io
├── 1bc6c9d
│   ├── Moya.json
│   ├── SourceKitten.json
│   └── realm-cocoa.json
├── 40494e4
│   ├── Moya.json
│   ├── SourceKitten.json
│   └── realm-cocoa.json
└── e4fa18d
    ├── Moya.json
    ├── SourceKitten.json
    └── realm-cocoa.json
```
Then on PRs, we also generate these reports but don't push them to S3, and diff them against the most recent commit in the branch that has reports available.
That way, we don't have to run SwiftLint twice on every CI job.
We could leverage this same system for performance tests.
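Finding the comparison point could then be a walk back through history until a commit with published reports turns up. A sketch under assumptions: in practice `has_reports` might do an HTTP HEAD against the bucket, but here it's a stub, and the SHAs are hypothetical:

```python
def latest_commit_with_reports(commits, has_reports):
    """Given commit SHAs ordered newest-first (e.g. the output of
    `git rev-list HEAD`), return the newest one with published reports,
    or None if no reports exist yet."""
    for sha in commits:
        if has_reports(sha):
            return sha
    return None

# Hypothetical: the PR's HEAD has no reports yet, but earlier
# master commits do.
published = {"1bc6c9d", "40494e4", "e4fa18d"}
history = ["deadbee", "1bc6c9d", "40494e4", "e4fa18d"]
print(latest_commit_with_reports(history, lambda sha: sha in published))
# 1bc6c9d
```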
This is nice, but to be honest I don't know if it's worth the complexity. In general, linting the files is fast compared to cloning and building the projects.
It could be more interesting for performance though, as we'd keep a history.
That's true. And in most cases, incremental compilation should help minimize the time spent building two different versions of SwiftLint...
On second thought, compiling master could be a bit tricky because the CI machine might not have the latest `master` commit. I remember that we had some issues with this on Danger.
How crazy would it be to store the reports in a git repo? 😳
> On second thought, compiling master could be a bit tricky because the CI machine might not have the latest master commit. I remember that we had some issues with this on Danger.
Not sure what you mean here. The CI workers have the git repo, so you can just do a `git checkout master && git submodule update --init --recursive`.
> How crazy would it be to store the reports in a git repo? 😳
More or less equivalent to the idea of storing them in S3. Either way, you need some sort of write access to somewhere. S3 has the advantage of being files that can easily be deleted after a certain age (if we want). What's in git always sticks around in git history, which could lead to large repos.
> Not sure what you mean here. The CI workers have the git repo, so you can just do a `git checkout master && git submodule update --init --recursive`.
The issue was that some CI providers use a shallow clone, so you'd need to run `git fetch --unshallow` (https://github.com/danger/danger/blob/65bfd24265c53428d943563186f1f3328bfd67eb/lib/danger/scm_source/git_repo.rb#L390). Should be easy to work around, though.
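The workaround could be as small as probing for a shallow clone before checking out `master`. A sketch under assumptions: `git rev-parse --is-shallow-repository` requires a reasonably recent git, and the injectable `run` parameter exists here only so the logic can be exercised without a real repo:

```python
import subprocess

def unshallow_if_needed(run=subprocess.run):
    """Fetch full history when the CI checkout is a shallow clone, so
    that `git checkout master` has the commits it needs."""
    probe = run(["git", "rev-parse", "--is-shallow-repository"],
                capture_output=True, text=True)
    if probe.stdout.strip() == "true":
        run(["git", "fetch", "--unshallow"])
        return True
    return False

# Exercise the logic with a stubbed runner instead of a real repo
from types import SimpleNamespace
calls = []
def fake_run(cmd, **kwargs):
    calls.append(cmd)
    return SimpleNamespace(stdout="true\n")

print(unshallow_if_needed(run=fake_run))  # True
print(calls[1])                           # ['git', 'fetch', '--unshallow']
```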
First take at this done in #1056. It's by no means perfect: it'd be nice to be able to run it locally (without mangling the git environment), to improve the output in GitHub comments, better linking (quite fragile at the moment), more projects, etc. But we can track these as separate issues if we find they'd be valuable in the future.
We've already seen in a handful of PRs recently that running on Realm Swift was beneficial in highlighting false positives.
There are a number of other open source projects that use SwiftLint that we could run new rules and bug fixes against, to help minimize the number of false positives.
If we do this, these integration tests should never fail a build, and would ideally follow Danger/Codecov's approach of posting a constantly-updating GitHub comment reporting the violations in these projects. (`GitHubReporter`, anyone?) Proactive contributors could also "fix" these style violations in these OSS projects! Everybody wins 😄.