timsneath opened this issue 1 year ago (status: Open)
I wonder if there should be a general "unused function tearoff" lint? cc @pq
Ah. This is very interesting. The `@useResult` annotation may well do the trick?
Some background in https://github.com/dart-lang/linter/issues/1888 and https://github.com/dart-lang/sdk/issues/36653.
We are using `@useResult` to prevent writing `check(value)` with no expectations.
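For illustration, a minimal sketch of how `@useResult` catches a discarded subject. The `Subject`/`check` shapes here are simplified stand-ins for `package:checks`, not its real API; the annotation itself comes from `package:meta`:

```dart
import 'package:meta/meta.dart';

// Annotated so the analyzer reports `unused_result` when the
// returned subject is discarded without attaching an expectation.
@useResult
Subject<T> check<T>(T value) => Subject<T>(value);

class Subject<T> {
  final T value;
  Subject(this.value);

  void isNotNull() {
    if (value == null) throw StateError('expected a non-null value');
  }
}

void main() {
  check(42);             // WARNING: unused_result — subject discarded.
  check(42).isNotNull(); // OK: an expectation is attached.
}
```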
You could try enabling the `unnecessary_statements` lint rule. That should catch cases like this.
https://dart.dev/tools/linter-rules#unnecessary_statements
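For reference, opting into the rule in a project's `analysis_options.yaml` looks like this:

```yaml
# analysis_options.yaml
include: package:lints/recommended.yaml

linter:
  rules:
    - unnecessary_statements
```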
Ah, this is good (for the `isTrue` case). Is there a good reason why this lint couldn't be enabled in `lints/recommended.yaml`? It would have certainly caught the examples above.
> Is there a good reason why this lint couldn't be enabled in `lints/recommended.yaml`?
I don't think it's been discussed for `package:lints`. I filed https://github.com/dart-lang/lints/issues/104 to start a discussion.
There has been internal discussion and interest in adopting the lint, but with progress stalled on a way to suppress it for false positives, I don't think we have enabled it.
I'll close this issue since I think there isn't anything to do here, and we can track the recommendation on the other repo.
Just before we close this issue down altogether, can we briefly reconsider whether the right choice is a property or a method for these simple `isTrue`/`isNull`-like conditions?
I think the user confusion here is that the obvious analog for `expect(foo, isTrue)` is `check(foo).isTrue`. If this were a property, then `.isTrue()` would automatically fail to compile. The converse isn't true, of course, hence this issue. Changing these parameterless conditions to properties would guarantee no misses here, wouldn't it?
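To make the two shapes concrete, here is a minimal sketch (again, `Subject`/`check` are simplified stand-ins, not the real `package:checks` API):

```dart
class Subject {
  final Object? value;
  Subject(this.value);

  // Method form, as in the current design.
  void isTrue() {
    if (value != true) throw StateError('expected true');
  }
}

Subject check(Object? value) => Subject(value);

void main() {
  check(true).isTrue();  // Runs the expectation.
  check(false).isTrue;   // A tearoff: compiles, but never runs the check.
}
```

If `isTrue` were instead a `void` getter, the second line would run the check, and accidentally writing `check(false).isTrue()` would be a compile-time error, since a `void` expression can't be invoked.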
> Changing these parameterless conditions to properties would guarantee no misses here, wouldn't it?
It would reduce them, but not eliminate them.
For example, `completes` is a property and `completion()` takes an argument, but this separation is (I expect) an artifact of legacy, not intention. We will not carry the same design forward, and the `checks` version of `completes()` will take an optional argument (or perhaps return a subject, if we decide to revert a recent change).
A quick note on design priorities: consistency in naming, arguments, or behavior with `matcher` to ease the migration is a vastly lower priority than the ergonomics of writing new tests going forward. The places where I carried forward the naming and behavior are places where experience in `matcher` proved the utility.
On the specific topic of using `isTrue` over `isTrue()`: there have been requests for that even outside of migration concerns, and it's not out of the question to switch. I think @lrhn is a big proponent of that change.
I like the consistency of the current design. If you are checking something, you call a method. If you are finding some derived property, you call a getter.
I can reopen this issue to see if there is further support for changing to getters.
Thanks so much, Nate!
This is where I think we're not aligned: "consistency in naming, arguments, or behavior with matcher to ease the migration is a vastly lower priority than the ergonomics of writing new tests going forward". We've reached a point in the product lifecycle where this kind of change is much more costly to the ecosystem than it was a few years ago.
It may yet be true that 90% of the apps to be written with Flutter haven't been started, but there is still a LOT of code out there today. We have two choices:

1. Keep `expect()` code and documentation around forever, and aspire that customers never migrate any of that forward, which creates debt forever: not just for code but for the ecosystem (we have to explain to every new user what the difference is and which to choose, we have to support both in IDEs and tooling, we have to deal with bifurcation of training and docs, etc.)
2. Expect customers to move to `check()` over time, in which case we owe them a pleasant migration experience.
I accidentally changed:
to:
(instead of `check(foo).isNotNull();`). This compiles and gives no warning, but of course does not actually execute the test.
Is there a way we can detect this kind of user failure?
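The earlier suggestion in this thread points at one answer: the dangling tearoff is an expression statement with no effect, which is exactly what `unnecessary_statements` flags. A minimal sketch of the failure mode (`Subject`/`check` are simplified stand-ins, not the real `package:checks` API):

```dart
class Subject {
  final Object? value;
  Subject(this.value);

  void isNotNull() {
    if (value == null) throw StateError('expected a non-null value');
  }
}

Subject check(Object? value) => Subject(value);

void main() {
  check(null).isNotNull;  // Tearoff only: no effect at runtime.
                          // LINT: unnecessary_statements (when enabled).
  check('x').isNotNull(); // Executes the expectation.
}
```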