felix91gr opened 5 years ago
Our cyclomatic complexity lint already addresses a few of the issues mentioned in the cognitive complexity paper. E.g. a `match` has a CC of 2 regardless of the number of arms.

I wonder if it would make sense to simply rename our cyclomatic complexity lint to cognitive complexity and start integrating a few more of the suggestions from the linked paper.
Yeah I think that would be better.
Aside: let's call Cyclomatic Complexity as "CyC" and Cognitive Complexity "CoC"? Since both of them can be called CC, but are not the same thing x3
> E.g. a `match` has a CC of 2 regardless of the number of arms.

Here I assume you meant CyC. If so... does the CyC lint in Clippy give all `match`'s a score of 2? Shouldn't it depend on the number of branches of the `match`, since CyC measures...

> "(...) the number of linearly independent paths through a program's source code."

(Quote from Wikipedia's first paragraph on CyC.)
> I wonder if it would make sense to simply rename our cyclomatic complexity lint to cognitive complexity and start integrating a few more of the suggestions from the linked paper.
In general, for introduction purposes, I think it makes sense! You'd have the same lints as always, but the current CyC lint becomes more powerful. Users just get an upgrade of their already great tool.
But in the long term I wonder if it might be better to keep them separate. CyC is better as a metric of how hard a piece of code is to test, while CoC is better as a metric of how hard a piece of code is to understand. (Code Climate's article comparing both is very good to illustrate this point)
Also I think we'd prefer (please correct me if I'm wrong!) to keep the lints smaller, because it would allow a user to use Clippy to tackle smaller, more approachable problems.
> Shouldn't it depend on the number of branches of the `match`,
Right, @oli-obk's point is that our CyC lint doesn't actually measure CyC; it measures something in between CyC and CoC, which is what folks found to be more useful. Moving it all the way towards CoC is fine by me. As a linter engine, "hard to test" is far less useful and actionable advice to give.
> it measures something in between CyC and CoC, which is what folks found to be more useful.
Ohhhhhhhhhhhhh, this makes much more sense to me now.
> Moving it all the way towards CoC is fine by me. As a linter engine "hard to test" is far less useful and actionable advice to give.
If there's consensus on this, I'm willing to help :)
Yea, pure CyC has been found to be pretty useless in Rust as you often use matches not to run different code but more as a lookup table for values.
Yea, I think we can just go ahead with this. As a first step I would nuke the cfg function call and the math resulting from it and just keep the numbers from the visitor. Then, change the visitor to match the CoC rules. If I understood the paper correctly, it's all syntactical, so we should probably move to a pre-expansion-pass. IMO macro calls reduce complexity, even if macro definitions increase it
Oh, btw, feel free to bug me about this here, on discord or on irc. I'm very interested in any musings you have on this.
With Oliver (@oli-obk) we're now working on this. If anyone wants to chip in, reach us ^^ We plan on making an MVP first and leaving space for later expansions on more fine-grained or complex scenarios, as the use-cases start to appear.
**Input:** a Rust function. (Option for the future: a Rust module, aggregating the CoC of its functions.)

**Output:** a Cognitive Complexity score that loosely fits the specification laid out in the paper.

From now on, this text assumes the reader is familiar with the paper. (I can change it later if people prefer I copy some of the paper here!)
The following structures increment the nesting level: `if`, `else`, `else if`, `if let`, `match`, `for`, `while`, `loop`.

For each of the following constructs, there is an increment in the CoC score of (1 + nesting level): `if`, `match`, `for`, `while`, `loop`.

For each of the following constructs, there is a constant increment of 1 in the CoC score: `break`, `continue`, `&&`, `||`, `&`, `|`, `^`.

The following constructs are explicitly exempt from increments, since they replace and simplify other constructs:

- the `?` operator for error propagation
- the `if let` statement

Here are things that we're explicitly postponing until we have the tools or the know-how necessary to implement them:

- conditional compilation: `#if` should at least add +1 to the nesting level.

I need help with Rust specific features I might be missing. I'm still relatively new to the language, so any help is appreciated! :pray:
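To make the rules above concrete, here is a hand-scored sketch. The function and the per-line increments are purely illustrative, scored by hand against the rules as listed, not by any implementation:

```rust
fn sum_of_primes(max: u64) -> u64 {
    let mut total = 0;
    'outer: for i in 2..=max {   // +1 (for, at nesting level 0)
        for j in 2..i {          // +2 (for, at nesting level 1)
            if i % j == 0 {      // +3 (if, at nesting level 2)
                continue 'outer; // +1 (continue: constant increment)
            }
        }
        total += i;
    }
    total
}
// Hand-computed CoC score: 1 + 2 + 3 + 1 = 7
```

Note how the same `if` would cost only +1 at the top level; the nesting of the two loops is what drives its cost to +3, which is exactly the property CyC lacks.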
> * Ternary Operator (`a = q ? b : c;`)
doesn't exist in Rust ;)
> * Nested functions
Are these a problem? How is it less complex to have the function outside?
> * Is there conditional compilation in Rust? That should increment the nesting level as well.
we have `cfg` attributes, but clippy doesn't see them right now. There are a bunch of things we want to do with them, so ignore this until we figure out how to get them.
> * Sequences of binary logical operators
both short circuiting and non-short-circuiting are relevant here imo
Also, if I read the paper correctly, only changes of the binary operator in a sequence cause an increment. So `x && y && z` is fine, but `x && y || z` will increment.
> I need help with Rust specific features I might be missing. I'm still relatively new to the language, so any help is appreciated! :pray:
Since everything is an expression, I think having any nesting expression inside e.g. a binary operation (`x + if cond { y } else { z }`) or a function call (`foo(x, y, match bar { X => 42, _ => 0 })`) should also cause increments.
`break` from a `loop` seems fairly nonproblematic. Not sure if that should increase the CoC even further than the `loop` by itself did.
> doesn't exist in Rust ;)
Haha 😆 my bad, I'm definitely still learning.
> - Nested functions
>
> Are these a problem? How is it less complex to have the function outside?
In the paper, nested functions are considered part of the body of the function itself. They add no increment, but they are like a new if-block in that they add nesting. I think that's because they are being treated as closures, capturing the surrounding context.
Nested functions in Rust can't do that ~~right?~~ For all intents and purposes, they might as well be an external function, but one that can only be seen from the scope of the containing function. ~~Right? If so,~~ I think we could for the most part treat nested functions as an external function entirely, having its own score and not affecting the containing function's score at all. (Edit: someone already told me the answer to the capturing questions)
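To illustrate the capturing question, here is a hypothetical example (`outer`, `double`, and `clamp` are made-up names): a nested `fn` in Rust cannot capture locals from its enclosing function, unlike a closure:

```rust
fn outer(limit: u32) -> u32 {
    // Nested function: scoped to `outer`, but it cannot capture `limit`
    // or any other local. It must receive everything through parameters,
    // exactly like an external function would.
    fn double(x: u32) -> u32 {
        // `limit` is not accessible here; `x * limit` would not compile.
        x * 2
    }

    // A closure, by contrast, does capture `limit` from the environment.
    let clamp = |x: u32| if x > limit { limit } else { x };

    clamp(double(limit))
}
```

This is why treating a nested `fn` like a free function (rather than like a closure) is a defensible scoring choice.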
> we have cfg attributes, but clippy doesn't see them right now. There are a bunch of things we want to do with them, so ignore this until we figure out how to get them.
Oki doke!
> both short circuiting and non-short-circuiting are relevant here imo
Same.
> Also, if I read the paper correctly, only changes of the binary operator in a sequence cause an increment. So x && y && z is fine, but x && y || z will increment
Almost. `x` gives +0, `x && y` gives +1 and so does `x && y && z`. `x && y || z` gives +2. It's +1 as soon as a binary operator is introduced, and +1 for every time it changes.
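Spelled out on runnable expressions (the increments in the comments are applied by hand, following this description; the function is only a vehicle for the examples):

```rust
/// Evaluates the three example expressions from the discussion;
/// the CoC increments in the comments are hand-applied, not computed.
fn sequences(x: bool, y: bool, z: bool) -> (bool, bool, bool) {
    let a = x && y;      // +1: a sequence of binary operators begins
    let b = x && y && z; // +1: same operator throughout, still one sequence
    let c = x && y || z; // +2: +1 for the sequence, +1 for && changing to ||
    (a, b, c)
}
```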
> Since everything is an expression, I think having any nesting expression inside e.g. a binary operation or a function call should also cause increments.
Good point! We do have to consider this from the perspective of expressions. I think your examples are right. Those expressions introduce abstraction, and that should be accounted for :)
> break from a loop seems fairly nonproblematic. Not sure if that should increase the CoC even further than the loop by itself did.
That's a fair point. Hmm. Since there can be many `break`s inside a `loop`, maybe the `loop` should add just 0, putting its cognitive cost mostly in the breaks that stop it 🤔
> `break` from a `loop` seems fairly nonproblematic. Not sure if that should increase the CoC even further than the `loop` by itself did.
I think `loop { ... break ... }` is often considerably less understandable than iterators, since the latter declaratively describes what happens while the former describes how it happens. `loop` + `break` invites spaghetti in most cases in my view and is rarely the right choice, so making it more encouraged seems sub-optimal.
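A small, hypothetical pair of functions contrasting the two styles (the names are made up for illustration):

```rust
// Imperative: the reader must trace the flow through `loop`, two `if`s,
// and two value-carrying `break`s to see what is computed.
fn first_even_loop(xs: &[i32]) -> Option<i32> {
    let mut i = 0;
    loop {
        if i >= xs.len() {
            break None;
        }
        if xs[i] % 2 == 0 {
            break Some(xs[i]);
        }
        i += 1;
    }
}

// Declarative: the iterator chain states *what* is being computed.
fn first_even_iter(xs: &[i32]) -> Option<i32> {
    xs.iter().copied().find(|x| x % 2 == 0)
}
```

Both return the same results; the argument is that the second form carries less cognitive load, so a scoring scheme that makes `loop` + `break` cheap would push in the wrong direction.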
As for nested functions, I think they add to rightward drift, and that is often a key factor in making things harder to understand. Rightward drift is certainly something I try to fight as part of language design.
Another thing to consider is `unsafe { ... }`; I would try to assign it an extra penalty and additional penalties for both long and deep `unsafe { ... }` blocks as it risks making more behavior use relaxed rules as well as incurring more proof obligations to check.
> Another thing to consider is `unsafe { ... }`

`unsafe` blocks certainly require much more care when reading the code. Thank you for reminding us! I will include it in the list of things left to discuss.
> As for nested functions, I think they add to rightward drift, and that is often a key factor in making things harder to understand. Rightward drift is certainly something I try to fight as part of language design.
I'm implementing this atm. Do you think that the score of Nested Functions should be added to that of its mother function? Or should they only be scored separately?
I find NFs very useful. However, scoring them completely apart from their mother function doesn't seem like a great solution, since the complexity of a mother and one of its nested functions is much greater than that of two comparable functions that are 100% separated. This is because having a mother and a NF means that their design is intertwined, and that you need to understand both in order to understand the whole (they are, at the end of the day, one function).
At the same time, the amount of complexity in a function tends to be reduced in most cases after adding a NF, because there's code repetition that the NF is consolidating in one unit. Scoring the NF once (and giving it a starting nesting bonus to its score) should reflect that in most cases, unless the NF itself is very complex.
Thoughts? (all)
More fodder for thoughts. Anyone who's interested in this topic, please feel free to reply :)
> I'm implementing this atm. Do you think that the score of Nested Functions should be added to that of its mother function? Or should they only be scored separately?
I would treat NFs as if they secretly were closures in general. While breaking things into separate functions is great, NFs are not individually testable and so a free helper function can be preferable (depends... maybe the helper makes no sense as a separate logical unit...).
> How much harder, in terms of percentage, is an `unsafe` block to understand/work with for you than a normal block?
I think there's a multiplicative cost here. So instead of giving maybe `k + stuff_inside` I would consider giving it `k1 + (k2 * stuff_inside)`.
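A sketch of the `k1 + (k2 * stuff_inside)` idea; the function name and the constants are entirely made up for illustration, not agreed-upon values:

```rust
/// Hypothetical scoring for an `unsafe` block: a flat entry penalty plus
/// a multiplier on whatever score the block's contents would earn on
/// their own, reflecting that every inner expression is "a bit harder".
fn unsafe_block_score(inner_score: u32) -> u32 {
    const K1: u32 = 2; // flat penalty for the block existing at all
    const K2: u32 = 2; // per-unit markup on the inner complexity
    K1 + K2 * inner_score
}
```

With these placeholder constants, a block whose contents would score 5 on their own would score 2 + 2 × 5 = 12.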
> I would treat NFs as if they secretly were closures in general.
The current draft implementation does exactly that ^^
> I think there's a multiplicative cost here.
Nice. I was thinking the same thing :D I'm glad I'm not the only one.
But it makes a lot of sense: every expression in an `unsafe` block is just like a normal one, only a bit harder. You could factor that "bit" out and multiply it by the score that the entire `unsafe` block would have if it was a normal block. ^^
Could you quantify how much more complex a function is for you if it has 1 lifetime parameter, compared to having none? What about 2, 3, 4, and more?
This probably applies to type parameters as well but less so. Yet, I would try to avoid penalizing generic code over less generic code since that's often nice for code reuse. Maybe start adding increments at 3? I'd give lifetimes higher increments than type parameters.
> Maybe start adding increments at 3? I'd give lifetimes higher increments than type parameters.
Seems fair :)
It's not just statements inside an unsafe block that increase the difficulty in understanding the code, but also statements outside an unsafe block. In fact an unsafe block can essentially require you to understand the entire module.
I would like this lint to be highly configurable. Since there are many more things to consider (`unsafe` blocks, nested functions, macros, ...) than in the cyclomatic complexity lint, it is probably not really helpful to configure this lint with only one threshold. IMO the best case would be if every part of this lint gets its own threshold / is configurable on its own.
For nested functions, for example, you should be able to write:

```
cognitive-complexity = [
    (nested-functions, 2),
    (lifetime-parameter, 2),
    .. // more configs
]
```

and then the lint will increase the cognitive complexity only for nested functions with a nesting level greater than or equal to 2.
I don't really know how to implement this with our current configuration system. We probably need our own configuration (sub-)parser for the cognitive complexity lint config.

After a configuration like this is possible, we can set the threshold of every configuration pretty low and wait for complaints to figure out sensible default values for every configuration.
> It's not just statements inside an unsafe block that increase the difficulty in understanding the code, but also statements outside an unsafe block. In fact an unsafe block can essentially require you to understand the entire module.
Hmm. That is true. How would you suggest scoring them? Just a high constant (e.g. 20) if there is at least one `unsafe` block in the function, maybe?
@flip1995 you raise a fair point. We were thinking about having an MVP running and seeing where we could go from there. Maybe a finer-grained configuration for CoC can be added then after we understand it better in the field :D
I think for now you can just pessimize `unsafe` blocks by increasing the counter by `1` and we can revisit once the unsafe coding guidelines working group has some rules laid down.
I'm working on making the scores more fine-grained. Lemme show you what I mean.
The paper works on a number of syntactic language constructs. However, now that I've implemented all of them, I still have left around half of the `ExprKind`s in `rustc`'s Abstract Syntax Trees. For deciding whether or not to propose scoring them in any way, I've asked myself the following:
These are the answers that I've got:
`InlineAsm` (too complex, deserves its own research), `Async` (unstable) and `Box` (unstable). I would leave these three outside of the scope of this work for now.
`Ret` and `Yield` are both similar to `Break`, and `Tup` and `Array` feel as complex as `Call` and `MethodCall`. In total, this is the list:

- `Ret`
- `Yield`
- `Cast`
- `Struct`
- `Tup`
- `Array`
- `Range`

`Paren` sure adds some weight, but is way lower than a function call. Same for `Index`, `Field`, `Unary`, etc. This is the list:

- `Paren`
- `AssignOp`
- `Assign`
- `Index`
- `Field`
- `AddrOf`
- `Unary`

I would score the as-complex items in similar ways as the ones that appear in the draft, remembering to classify them as "Nesting-Dependent" or "Nesting-Independent". The less complex ones, I'd give a much lower score (a fifth or so) than the baseline, since they increase the complexity by smaller steps.
(Points 3. and 4. continue in next comment)
- `Ret` and `Yield`: they change the flow of the function. They are harder to understand the deeper you are in the flow tree of the function. For that same reason, I'd not only score these as N-D, but also change the current scoring for `Break` and `Continue` (which are considered N-I in the paper) to be N-D.
- `Assign` and `AssignOp`: the change they make on the state of the function depends on where they stand in the control flow. They seem like they could be N-D, but I'm not so certain.
- All the other `ExprKind`s are pretty much Nesting-Independent for me.
So far it's been `ExprKind::Block`s that incremented the nesting value, and that's how it's currently implemented, but that might not be the full story. We could also consider these constructs to increase the nesting level for the expressions they carry inside:
- `Call`, `MethodCall`, `Mac`: their parameters can be and can contain expressions, so inside that "scope" we could think there is a nesting happening.
- `Paren`
- `Tup`: `(a, b, if c { 5 } else { - 6 })`
- `Index`: `foo[...]`
- `Range`: `a..b + 5`
- `Array`: `[a, b, [c, d && e]]`
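The kind of expression nesting discussed here, in runnable form (a hypothetical example; the point is only that the inner `if` and `match` live inside another expression's "scope"):

```rust
fn classify(bar: u32, cond: bool) -> (u32, u32) {
    // An `if` expression nested inside a binary operation...
    let x = 1 + if cond { 10 } else { 20 };
    // ...and a `match` expression nested inside a function-call argument.
    let y = std::cmp::max(x, match bar {
        42 => 100,
        _ => 0,
    });
    (x, y)
}
```

Whether each of these nesting sites should bump the nesting level is exactly the open question above.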
Extras: I've left out some `ExprKind`s that I still don't understand fully. Any help with understanding/scoring them is really appreciated:

- `Repeat`: I don't know how to score this, since I've seen it very little.
- `TryBlock`: haven't used it, please help ^^
- `ObsoleteInPlace`: I have no idea, the docs seemed to ignore it.

I have left out `Type` and `Path` as well. I'm currently studying those ^^
- `Repeat` is just like `Array` in my book
- `TryBlock` should wait until we see wider usage so we can actually have a look at real world situations
- `ObsoleteInPlace`: I think that is just there so the parser doesn't choke on the syntax, ignore it completely, rustc will error

General note: any divergence from the paper should be documented.
> any divergence from the paper should be documented
I agree very much. I want to document all of this, even if it takes me 2+ weeks to do it after the implementation is done :)
Hi all, sorry for the lack of activity. I have news :D
I've finally started my internship! It's at a local foundation here in Chile, they do super cool things and the people are awesome <3 However, that means that my time will be much reduced for the following months. I should have at most like 30% of the time I've had before. Progress will still exist, but at a lower pace. I hope y'all understand ^^
I really want to see this lint through, and for the moment my intent is to finish it. Have a great week! --Félix
No problem, thanks so much for all your work so far! Congrats on your internship!
First off, congrats on your internship! :tada:
If you want help with specific tasks of this lint, I'd be happy to help, if you "mentor" me. Just delegate the tasks to me that are still open for implementation and you can't find the time for.
Hi you guys, here's an update. Thank you for not pinging me in the meantime ^^
I had a couple of really rough weeks irl at the beginning of May, and had to ask for a break at work to work things out. Now I'm doing better, have come back to work and I think I have a steady pace there. I wouldn't call the issue I had a done thing yet, but I think the prospects are good.
If all goes well, I want to try coming back to this. I'll write below what's left before the next step, and @oli-obk will correct me if I forgot something :)
And @flip1995: I'd be glad to 🌈 tell me if there's something you're interested in from the list :3
I have come here just to tell you how amazing this is. Please finish the work on it, I can't wait any longer; I'm itching to try it asap.
The complexity threshold in Clippy can be set to zero, so even a `main` with a hello world string printed introduces a warning. Should we check for the value of zero? Or does it make any sense as a use-case?
Yeah that will be possible. You can configure the threshold as low or as high as you want. But a threshold of 0 would mean: "Every code is too complex, so please don't write any code", which wouldn't be that useful, would it? 😄
There's still a WIP PR (#3963) where you can contribute, if you want that feature rather sooner than later 😉
> And @flip1995: I'd be glad to 🌈 tell me if there's something you're interested in from the list :3
Since I wrote that, I'm busy with my master thesis and will be for a few months. So I don't have the time to contribute to this project either :/
Hi everyone! It's been a while.
I've been researching this topic more and more on my free time. I also contacted an expert on static analysis to bug them about it, and they were very kind about it. Anyway, I digress.
This research has led me to conclude that this lint, in two different ways, is a mistake. I want to write about it more comprehensively than this, but if you'll believe my words, I really think it is.
First: what do I mean by two different ways?
There are two paths that I would like to follow. The first one is retracting my own proposal. The second one is blocking this entire lint family until we get much better analysis tools and understanding of code analysis than what we have today in both Clippy and the static analysis community.
Let's start with why I would like to retract this proposal.
Second: why do I want to reject my own proposal?
Let's take a look at the "paper". Here's what I found while digging into it:
1. It's a _whitepaper_, **not** a _paper_. Which means it's much more about marketing and reporting than about science.
   a. It has **no empirical base**: you can see that it has no real-world testing, no user feedback, no implementation, nothing of the sort.
   b. It has no **theoretical base**: the algorithm is derived only from touchy-feely things. From its _Basic Criteria and Methodology_ section:
      > 1. Ignore structures that allow multiple statements to be readably shorthanded into one

      What do you mean, so hiding complexity is okay?
      > 2. Increment (add one) for each break in the linear flow of the code

      Why **one**? Where does that come from? None of this comes from any formal analysis of complexity.
2. SonarSource makes it [sound awesome](https://blog.sonarsource.com/cognitive-complexity-because-testability-understandability), but the damned thing has almost identical [implementations](https://github.com/search?q=topic%3Astatic-analysis+org%3ASonarSource+fork%3Atrue) across the different languages in which they offer it.
All in all, it's pretty clear to me that the lint that I was planning to emulate is bullshit.
But there's more to this story. Why would I want to not only reject my proposal, but also mark the idea of Cognitive Complexity linting a non-goal in itself?
Third: why go beyond and reject the idea?
You'll have to read more to get me here, I'm sorry. First go here.
The first problem with any Cognitive Complexity lint is a lack of technology: in order to really assess how complex code will be to read, we would need to emulate how a person processes code, or hack around it at least. And how hard could that be?
Well, as mentioned in the article above, the information of a program's design is largely not present in its code. There are isomorphic programs that are completely different in cognitive complexity, just because of their context. What does `x > 52` mean? Is it a buffer restriction, or an ASCII character boundary? We need to be able to distinguish between those two programs. Therefore, we at least need to be able to parse and understand basic documentation.

Can we do that today with Clippy? No, we cannot.
And we should not promise that, either. Clippy is awesome, probably the best static analyzer that I've seen. But there are limits to how good of an analysis Clippy itself can do. We should be humble and accept the reality of the technological hardness of Cognitive Complexity.
I want to thank @Centril and @flip1995 for being so kind to me <3, @Manishearth for their patience with my git scrambling, and @oli-obk for taking me seriously and discussing all sorts of things while guiding me through Clippy :3.
As a last point? Maybe? I want to draw you to this paper. This is a real paper, and it's about static analysis and at the end, about a more real attempt at measuring cognitive complexity. I would like to understand it more, because it seems to me that the shortcomings in Clippy could be reduced with those kinds of techniques.
Personally, I did find some real rationale behind this paper draft. I know it hasn't been published and does not refer to any research, but I agree with most of the points there. Anyway, cyclomatic complexity has been around for a long time, and this draft paper referred to it as well. And cyclomatic complexity does not imply readability or maintainability, so it can't really serve that purpose anyway. So this draft paper and the "cognitive complexity" principle are still what we have now; there isn't anything better, and I doubt there ever will be unless someone from some university, or a scientist, decides to really work hard on this.
Given all of that said, by you and me, I still think we should let the lint live as it is. Once anything better comes along we could implement it as well, or when cognitive complexity becomes officially recognized, we could change this lint or add a new one with a `_v2` suffix at the end. I am using this lint and it is pretty useful to me. My code is checked and has become much cleaner. I like functional languages and functional style, and I prefer this to be checked automatically. What I've seen so far using this lint has only pleased me.
Also, nothing prevents us from discussing here all the ways to improve it even so. So that we would implement something even better, than was suggested by the paper draft.
> there isn't anything better now, and I doubt will ever be
I believe there will be. Give humanity 20 years and we will have automatic, human-like reasoning about code.
There is much stronger tech than what Clippy has in the research circles already. Things can and I think, will, go beyond the current state of affairs.
> cyclomatic complexity
Yeah. Cyclomatic Complexity is a really bad metric. It shouldn't be used for linting, except for setting up a test suite. It really has no meaning with regards to the understandability of code.
> Also, nothing prevents us from discussing here all the ways to improve it even so.
I think you're right, but I don't think any of what's implemented here or proposed in the Draft comes close to improving it.
We aren't really measuring code complexity at all. If we were to enhance it and be honest to ourselves about it, we really need to start using something better than a structural pass over the code.
> Given all of that said, by you and me, I still think we should let the lint live as it is. Once anything better occurs we could implement it as well, or when cognitive complexity becomes officially recognized, we could change this lint or add a new one with a `_v2` suffix at the end. I am using this lint and it is pretty much useful to me.
That is awesome. If you do find that your project is better off from this lint, great.
But even still, I think we should remain humble. This lint's name doesn't say what it is. It can't measure Cognitive Complexity. We should at least change it to "Structural Complexity" or something similar. That would help a lot, in my opinion. That and/or changing it to Pedantic.
I agree with all you've said, and
> This lint's name doesn't say what it is. It can't measure Cognitive Complexity.
Also sounds reasonable. I can't think of any other name though. Also, I thought this lint had already been marked as "pedantic".
Thank you for listening @vityafx 😊
Also: I checked it just now on the lint list, and CoC is listed under "Complexity" (as opposed to "Pedantic"): https://rust-lang.github.io/rust-clippy/master/#cognitive_complexity
Maybe we should create a working group for defining the cognitive complexity ourselves? Why not? Should we be scientists for this? We are experienced people, and we have all the cards to play this game.
I mean you can. Why not, right?
I will not accompany you though. Because:
> We are experienced people, and we have all the cards to play this game.
I don't think we have all the cards. We need a better, much better understanding of the human mind in order to do this.
If you wanna dive into it, by all means do! But I have retired myself from this endeavor at the moment.
Brain hurts.
Hi all, I'd like to propose (and implement, if someone can mentor me a bit!) a lint for Cognitive Complexity. It's a way to measure the cognitive load that a particular function puts on the reader.
I think that having an upper bound on the allowed cognitive complexity of functions would help ensure that complex functions are split into simpler, more maintainable ones. It would also help to complement the role that the Cyclomatic Complexity lint currently has.