giggio opened this issue 9 years ago
We have a command-line tool called StyleCopTester that uses MSBuildWorkspace
to load an arbitrary solution, configures a set of analyzers to run (all of them, or a particular set of IDs), and optionally applies code fixes (incremental or Fix All).
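For context, a minimal sketch (not the actual StyleCopTester code) of the core loop such a tool runs: open a solution with MSBuildWorkspace, attach a set of analyzers to each project's compilation, and collect the reported diagnostics.

```csharp
// Sketch only: open a solution, run analyzers per project, print diagnostic counts.
using System;
using System.Collections.Immutable;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;
using Microsoft.CodeAnalysis.MSBuild;

internal static class AnalyzerRunner
{
    public static async Task RunAsync(string solutionPath, ImmutableArray<DiagnosticAnalyzer> analyzers)
    {
        using (var workspace = MSBuildWorkspace.Create())
        {
            Solution solution = await workspace.OpenSolutionAsync(solutionPath);
            foreach (Project project in solution.Projects)
            {
                Compilation compilation = await project.GetCompilationAsync();
                ImmutableArray<Diagnostic> diagnostics =
                    await compilation.WithAnalyzers(analyzers).GetAnalyzerDiagnosticsAsync();
                Console.WriteLine($"{project.Name}: {diagnostics.Length} diagnostic(s)");
            }
        }
    }
}
```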
I am planning to extend it to run multiple passes: first with a control (an analyzer that registers a compilation start action and a syntax tree action that do nothing), and then with one analyzer at a time, to measure the aggregate performance overhead of each analyzer. The goal is to identify analyzers that have notable overhead even when few diagnostics are reported, since this "fast path" is the case that should be well optimized.
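A hedged sketch of what such a no-op control analyzer could look like (the NoOpControlAnalyzer name and the CTRL0000 id are made up for illustration): it registers a compilation start action and a syntax tree action that do nothing, so timing a pass with only this analyzer approximates the fixed per-analyzer driver overhead.

```csharp
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class NoOpControlAnalyzer : DiagnosticAnalyzer
{
    // The base class requires a descriptor even though this analyzer never reports it.
    private static readonly DiagnosticDescriptor Descriptor = new DiagnosticDescriptor(
        "CTRL0000", "Control", "Control", "Performance",
        DiagnosticSeverity.Hidden, isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics =>
        ImmutableArray.Create(Descriptor);

    public override void Initialize(AnalysisContext context)
    {
        context.EnableConcurrentExecution();

        context.RegisterCompilationStartAction(startContext =>
        {
            // Intentionally empty: the point is to measure the cost of the plumbing.
            startContext.RegisterSyntaxTreeAction(treeContext => { });
        });
    }
}
```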
We've expanded our automation surrounding this functionality: DotNetAnalyzers/StyleCopAnalyzers#1970
We did the same thing with PowerShell. You can see how we test CodeCracker against Cecil here. This is very manual, though. What I think we should be aiming for is something truly pluggable, where one could simply point an analyzer library at a set of existing, known-good projects and get a report of any errors.
Right now we depend on independent testing and user feedback to find out when an analyzer has a problem. We can write tests, but we cannot anticipate every possible problem an analyzer can have. There should be an easy way to run an analyzer/code fix project against real OSS projects and get feedback if something threw. This could be guidelines, tools, scripts, or something else; the end result should be something that runs without user interaction and that we can run after our unit tests (a sketch of what such a check could look like follows below).

We could even build a suite of ready-to-run OSS projects, driven by a command-line script, that any analyzer/code fix author could easily use. We have already started something like this on CodeCracker: we do it with Cecil, and you can see the script here. That is a hand-crafted script for that specific library, but it could easily be adapted to other analyzer projects. We could create a standard for such scripts (what they produce, what parameters they take, and so on) and submit them to a repo somewhere. That is one idea for the problem, not the definitive solution; we should discuss other approaches and pick the best one.
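One possible shape for the "did anything throw?" check, assuming a Compilation obtained as in the earlier snippet: the onAnalyzerException callback is Roslyn's hook for analyzer crashes, and AD0001 is the diagnostic id the driver uses to report them, so checking either signal should catch a crashing analyzer. The class and method names here are illustrative, not an existing API.

```csharp
using System;
using System.Collections.Immutable;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;

internal static class CrashCheck
{
    public static async Task<bool> AnalyzersThrewAsync(
        Compilation compilation, ImmutableArray<DiagnosticAnalyzer> analyzers)
    {
        bool threw = false;
        var options = new CompilationWithAnalyzersOptions(
            options: null,
            onAnalyzerException: (exception, analyzer, diagnostic) =>
            {
                threw = true;
                Console.Error.WriteLine($"{analyzer.GetType().Name} threw: {exception.Message}");
            },
            concurrentAnalysis: true,
            logAnalyzerExecutionTime: false);

        ImmutableArray<Diagnostic> diagnostics =
            await compilation.WithAnalyzers(analyzers, options).GetAllDiagnosticsAsync();

        // Depending on options, crashes may also surface as AD0001 diagnostics.
        return threw || diagnostics.Any(d => d.Id == "AD0001");
    }
}
```

Whether this runs from a PowerShell script, a console tool, or a CI step is then just a matter of wrapping the result in an exit code.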