rust-lang / rust-clippy

A bunch of lints to catch common mistakes and improve your Rust code. Book: https://doc.rust-lang.org/clippy/
https://rust-lang.github.io/rust-clippy/

Re-engineering clippy for Performance #1380

Open llogiq opened 7 years ago

llogiq commented 7 years ago

While looking at reducing memory cost for compiling clippy (and reducing LintPasses in the process), I came upon an idea that could improve performance quite a bit. Currently, we're doing the same thing over and over – for example, there must be at least eight macro checks for every expression visited, and we look at visibility and/or parents at least three times. If we could make clippy's architecture more streamlined and cache intermediate results, we could avoid those repeated checks and get some nice performance improvements.

Now you may be asking: if this is so great, why haven't we done it already? First, there's an obvious cost. In this new model, lints would be much more tightly coupled. There would be only two passes (one ClippyEarlyPass and one ClippyLatePass – or perhaps we could even reduce this to a single pass that implements both the early and the late pass?), which would implement the logic for all lints, calling out to specific checks. For example, we'd visit an expression and only go into certain lint methods depending on its type. We could also cache some information (e.g. parent, macro expansion information) within the Pass object, lazily loaded when needed and reused while visiting the node.
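To make that concrete, here is a minimal, self-contained sketch of the "one pass plus lazily cached per-node facts" idea. The `Expr`, `NodeId`, and lint functions below are hypothetical stand-ins for illustration, not Clippy's real types or APIs:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for compiler types, just for the sketch.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct NodeId(u32);

struct Expr {
    id: NodeId,
    from_macro: bool,
    parent: Option<NodeId>,
}

#[derive(Default)]
struct ClippyLatePass {
    // Facts several lints need, computed once per node and reused.
    in_macro_cache: HashMap<NodeId, bool>,
    parent_cache: HashMap<NodeId, Option<NodeId>>,
}

impl ClippyLatePass {
    fn in_macro(&mut self, expr: &Expr) -> bool {
        *self.in_macro_cache.entry(expr.id).or_insert_with(|| expr.from_macro)
    }

    fn parent(&mut self, expr: &Expr) -> Option<NodeId> {
        *self.parent_cache.entry(expr.id).or_insert_with(|| expr.parent)
    }

    // The single check_expr fans out to every expression lint; each lint
    // reuses the cached facts instead of recomputing them.
    fn check_expr(&mut self, expr: &Expr) {
        if self.in_macro(expr) {
            return; // one macro check instead of eight
        }
        let parent = self.parent(expr);
        lint_needless_bool(expr, parent);
        lint_identity_op(expr, parent);
        // ... calls into the remaining expression lints
    }
}

// Placeholder lint bodies; the real checks would live here.
fn lint_needless_bool(_expr: &Expr, _parent: Option<NodeId>) {}
fn lint_identity_op(_expr: &Expr, _parent: Option<NodeId>) {}

fn main() {
    let mut pass = ClippyLatePass::default();
    let expr = Expr { id: NodeId(0), from_macro: false, parent: None };
    pass.check_expr(&expr);
}
```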

Perhaps we could reduce our current lints to consistently named functions and have a script that generates the LintPass by collecting the existing functions and emitting calls. This would probably also reduce compile time – and the chance for errors because of missing passes.
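To illustrate what such a generated LintPass might look like, here is a rough sketch; the module names, the `Context`/`Expr` types, and the `check_expr` signature are placeholders, not the actual generated code:

```rust
// Hypothetical placeholder types standing in for the lint context and HIR nodes.
pub struct Context;
pub struct Expr;

// Each lint lives in its own module and exposes a consistently named function.
mod eq_op {
    pub fn check_expr(_cx: &crate::Context, _e: &crate::Expr) {}
}
mod needless_bool {
    pub fn check_expr(_cx: &crate::Context, _e: &crate::Expr) {}
}

// A build script (or macro) could scan the lint modules and emit this list
// of calls, so forgetting to register a lint becomes impossible.
pub fn check_expr(cx: &Context, e: &Expr) {
    eq_op::check_expr(cx, e);
    needless_bool::check_expr(cx, e);
    // ... one call per lint module, generated automatically
}

fn main() {
    check_expr(&Context, &Expr);
}
```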

killercup commented 7 years ago

Is clippy actually slow? I mean, what you suggest here sounds pretty cool (generating LintPasses is an interesting idea!), but how would you measure if the refactoring was successful? Do you have some perf numbers in mind?

FYI, the diesel crate (i.e., the sub dir diesel in diesel's repo) currently has 17,131 lines of Rust code, and cargo clippy begins to output messages after less than a second on my machine.

Manishearth commented 7 years ago

Clippy is fast to run. It has a slow compile time.

llogiq commented 7 years ago

True, clippy isn't (usually) slow, but I'm thinking about IDE users who'll want to lint their code at least on every save, or better, as often as CPU time permits. Running in a second vs. in 900ms would be a noticeable difference in that scenario, and I'm not expecting miracles here.

Otherwise, deduplicating our code, even if it couples our lints more tightly, could be a win for maintainability after all.

Manishearth commented 7 years ago

> Perhaps we could reduce our current lints to consistently named functions and have a script that generates the LintPass by collecting the existing functions and emitting calls.

This feels like my old proposal to have a ClippyExprLint trait that only does check_expr, with a single clippy pass that goes over all of these.
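For reference, a rough sketch of that idea under hypothetical, simplified types (the real version would plug into the compiler's LintPass machinery):

```rust
// Hypothetical stand-ins for the compiler's lint context and expression type.
pub struct Context;
pub struct Expr;

// Each lint only implements this narrow trait.
pub trait ClippyExprLint {
    fn check_expr(&mut self, cx: &Context, expr: &Expr);
}

struct EqOp;
impl ClippyExprLint for EqOp {
    fn check_expr(&mut self, _cx: &Context, _expr: &Expr) {
        // lint-specific logic would live here
    }
}

struct NeedlessBool;
impl ClippyExprLint for NeedlessBool {
    fn check_expr(&mut self, _cx: &Context, _expr: &Expr) {}
}

// The one pass actually registered with the compiler just forwards to
// every registered expression lint.
pub struct ClippyExprPass {
    lints: Vec<Box<dyn ClippyExprLint>>,
}

impl ClippyExprPass {
    pub fn new() -> Self {
        ClippyExprPass {
            lints: vec![Box::new(EqOp), Box::new(NeedlessBool)],
        }
    }

    pub fn check_expr(&mut self, cx: &Context, expr: &Expr) {
        for lint in &mut self.lints {
            lint.check_expr(cx, expr);
        }
    }
}

fn main() {
    let mut pass = ClippyExprPass::new();
    pass.check_expr(&Context, &Expr);
}
```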

panstromek commented 5 years ago

> True, clippy isn't (usually) slow, but I'm thinking about IDE users who'll want to lint their code at least on every save, or better, as often as CPU time permits.

Not sure if this issue is still relevant, but I just wanted to add that this is definitely on point. IntelliJ-Rust uses Clippy to annotate code and runs it on almost every keystroke (with some debounce, I guess). Any little performance gain would make a difference there ;). It's not that it's bad (it's actually great), but for comparison, WebStorm does the same with ESLint and that feels a bit more instant.

blyxyas commented 1 year ago

Very interesting! Pretty hard, but I'll see if I can do something, as Clippy has gained over 500 lints since Aug 2016, so this is pretty important now (and will be even more so in the future).