mavasani closed this issue 2 years ago.
I'd like to tweak:

> If an analyzer can produce a known set of false positives,
Almost all our analyzers can hit some false positives. The bar we've had here is that we have a reasonable belief that these false positives should be very rare and unlikely to matter in reasonably written programs.
As an example, many of our analyzers assume that a property call is idempotent: calling it once versus many times should have no effect on almost all reasonably written programs. That said, there are cases where we might produce a false positive, but only in cases we feel are either unrealistic or so rare that they are vastly outweighed by the value to the majority.
@CyrusNajmabadi I agree with your stance. I think it is reasonable to err on the side of some false positives when the majority of users/scenarios benefit from the suggestions/refactorings. We need to be a little more cautious about false positives from code quality suggestions with ..., but I would have no concerns with hidden diagnostic analyzers and refactorings having a lower bar for false positives.
@mavasani What if we had an analyzer runner that ran at very low priority in the background and collected diagnostics for the unconfigured style and quality analyzers? If there was a significant density of diagnostics reported for an analyzer, then perhaps we could pop up and say, "📎 - It looks like so and so analyzer could make your code more readable/higher quality. Would you like more information, to enable the analyzer, or to disable the analyzer?". They could then click through to get more details about the analyzer and fixer and add the analyzer to their configuration (as enabled or disabled).
If running in the background and popups are not desired, then perhaps there could be a tool window that, when opened, would run these analyzers and give information about how they might impact the code.
Conclusion of the design meeting: This is a complex issue and we may need to have:
Overall, we need to have a separate meeting to discuss each case.
@mavasani Since this will need more discussion, I will not move it to 'Completed' and will put it under 'Needs update'.
@JoeRobich Your idea seems to match @vatsalyaagrawal's ideas about doing offline/background analysis with disabled-by-default analyzers and somehow surfacing those results to users as a discoverability point. I think that is a very nice idea and complements this space, and we are going to try out things in that space.
We decided not to make any changes here for the .NET 5 release.
We now have a pretty good triage/approval process in place for .NET SDK CA analyzers (runtime team) and IDE CodeStyle analyzers (roslyn team). Going to close this out unless we hit these issues again.
Starting with the .NET 5 SDK, both the code style (IDE analyzers) and code quality (CA analyzers) packages are installed with the .NET SDK, which makes the user experience consistent inside Visual Studio for .NET users:
(Note: `EnforceCodeStyleOnBuild` needs to be explicitly set to `true`; in the future it might be turned on by default.)

Given that the end user would no longer distinguish between a suggestion or a refactoring coming from an IDE rule (Roslyn repo) or a CA rule (Roslyn-analyzers repo), we need to make sure that we have a consistent bar set for both repos in terms of whether a rule should be enabled by default as a suggestion, a refactoring, or disabled by default. We are getting a lot of new CA analyzer suggestions of late, and it would be good to have a set bar for the default severity/enabled state for these analyzers.
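As an illustration, opting in to code style analysis on build means setting the `EnforceCodeStyleOnBuild` MSBuild property in the project file. This is a minimal sketch; the property itself is real, but the surrounding project settings are just a typical example:

```xml
<!-- Sketch of a .csproj that opts in to code style enforcement on build.
     EnforceCodeStyleOnBuild makes the IDE (code style) analyzers run and
     report diagnostics during build, not just in the editor. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net5.0</TargetFramework>
    <EnforceCodeStyleOnBuild>true</EnforceCodeStyleOnBuild>
  </PropertyGroup>
</Project>
```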
Proposal
Below is my proposal for bucketing:
IDE refactoring
This includes both hidden diagnostic analyzer + code fix AND code refactorings
...
in the editor or an error list entry), we are not driving the user towards making this change, but just offering them quick actions to avoid manual work. For example: Use literals where appropriate
IDE suggestion
Info severity diagnostic with a
...
in the editor to draw the user's attention and an explicit message entry in the error list. For example: Abstract type constructors should not have public modifier; instead, it should be protected
Public, non-abstract, attribute types should be marked as sealed
Remove unused parameter
or IDE0051: Remove unused private member
An analyzer would be a candidate for a suggestion if it flags a potential functional bug in user code, but is not a strong enough assertion to be a build-breaking warning by default. For this case, it is fine even if the analyzer does not have an associated code fix, because fixing the issue might need non-trivial changes to user code. The goal is to draw the user's attention to the fact that something is wrong with the current code; they may have to read up more and identify the correct fix themselves. For example:
Use ValueTasks correctly
Do not create tasks without passing a TaskScheduler.

I believe this bucket would only apply to CA rules; it is highly unlikely that the above will apply to any IDE style analyzers. We can take a couple of routes for this bucket of rules:
Aggressive: It is fine to let users know about potential mistakes/improvements in their code, even if the fix is not trivial. They have the option to ignore the suggestion or suppress it in .editorconfig if they do not value it. If it turns out to be too noisy, they will report an issue.
I would personally recommend the aggressive route, as I feel this is the core benefit of having CA rules in the box: helping users write better code, or introducing them to code quality areas they are unaware of.
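To make the per-user escape hatch concrete, here is a sketch of how a user can dial individual rules down in .editorconfig. The `dotnet_diagnostic.<RuleId>.severity` option is the standard Roslyn configuration key; the specific rules tuned here (IDE0051 and CA1012, both real rule IDs) are chosen only for illustration:

```ini
# Sketch of per-rule severity configuration in .editorconfig.
[*.cs]

# Suppress a suggestion the team does not value.
dotnet_diagnostic.IDE0051.severity = none

# Keep a code quality rule visible but unobtrusive.
dotnet_diagnostic.CA1012.severity = suggestion
```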
Disabled by default
If an analyzer can produce a known set of false positives, where the reported diagnostic is invalid — for example, a heuristic-based analyzer — then it must be disabled by default. For example, a few ported CA rules use naming heuristics, such as looking for specific symbol names.
I believe the disabled-by-default bucket only applies to CA rules; it is highly unlikely that the above will apply to any IDE style analyzers.
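Under this proposal, a rule in this bucket ships off, and a team that finds its heuristics acceptable opts in explicitly per project. A hypothetical .editorconfig sketch (the rule ID here is illustrative; any heuristic-based, disabled-by-default CA rule would work the same way):

```ini
# Sketch: opting in to a disabled-by-default, heuristic-based rule.
# The rule ships off because its heuristics can produce known false
# positives; a team that accepts that trade-off enables it explicitly.
[*.cs]
dotnet_diagnostic.CA1303.severity = warning
```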