There's a very interesting feature for `elm-review` that I've kept in the closet so far and not released yet (I was fine-tuning it, and then had to stop working on it for a while).
The feature is being able to extract information from `elm-review`'s analysis. Currently, `elm-review` can only report errors, and that's it. But it has, I think, a wonderful API for going through a project and extracting information, and we could make use of it to pull information out of our projects. Here is an example: https://github.com/jfmengels/elm-review/blob/extracts/tests/Review/Rule/DataExtractTest.elm
It's basically just like a normal rule, but with an additional `|> Rule.withDataExtractor dataExtractor` function, where `dataExtractor` is a simple `ProjectContext -> Json.Encode.Value` function. The idea is that you collect anything you want using the regular `elm-review` visitors, and then you export that to JSON.
To access that information, you then run `elm-review` with `--report=json`. If your rule is named `Some.Rule`, the output will contain a JSON object with a key `"Some.Rule"` whose value is whatever you extracted as a JSON value.
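To make the shape of such a rule concrete, here is a minimal sketch of a rule that collects each module's imports and extracts them as a JSON object. The rule name `ExtractImports` and the context plumbing are illustrative; only `Rule.withDataExtractor` is the new function described above, and its exact signature on the `extracts` branch may differ slightly from this sketch.

```elm
module ExtractImports exposing (rule)

import Dict exposing (Dict)
import Elm.Syntax.Import exposing (Import)
import Elm.Syntax.Node as Node exposing (Node)
import Json.Encode as Encode
import Review.Rule as Rule exposing (Rule)


type alias ProjectContext =
    -- Module name -> names of the modules it imports
    Dict String (List String)


type alias ModuleContext =
    List String


rule : Rule
rule =
    Rule.newProjectRuleSchema "ExtractImports" Dict.empty
        |> Rule.withModuleVisitor (Rule.withImportVisitor importVisitor)
        |> Rule.withModuleContextUsingContextCreator
            { fromProjectToModule = Rule.initContextCreator (\_ -> [])
            , fromModuleToProject =
                Rule.initContextCreator
                    (\moduleName imports ->
                        Dict.singleton (String.join "." moduleName) imports
                    )
                    |> Rule.withModuleName
            , foldProjectContexts = Dict.union
            }
        -- The new part: turn the final project context into JSON.
        |> Rule.withDataExtractor dataExtractor
        |> Rule.fromProjectRuleSchema


importVisitor : Node Import -> ModuleContext -> ( List (Rule.Error {}), ModuleContext )
importVisitor node context =
    ( []
    , String.join "." (Node.value (Node.value node).moduleName) :: context
    )


dataExtractor : ProjectContext -> Encode.Value
dataExtractor projectContext =
    Encode.dict identity (Encode.list Encode.string) projectContext
```

With `--report=json`, the extract for this rule would then appear under the `"ExtractImports"` key in the output.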
I think there are a lot of potential use cases.
We could have community-created graphs for our projects. For instance, you could collect the imports and create a DOT graph for the project, as `elm-analyse` used to do. Or you could draw one for specific functions. I would love to see graphs of things like which `Msg` triggers which other `Msg` in an `update` function, so you can have a much clearer vision of what happens in a module.
You could track which CSS classes have been used by your project, and then compare that with your CSS files to clean them up.
I have been using this to collect security issues: given a vulnerable function, find out which other functions use it and are vulnerable by transitivity.
If we support creating files in automatic fixes, we could use the information to generate Elm code (manually or using `elm-codegen`) based on data found in our Elm files.
With #138, we'd be able to collect a lot of information, and the possibilities become endless.
## This PR also includes a new API for testing rules
The combination of things to test for (local errors, errors for multiple modules, global errors, data extracts, and any combination of these) was becoming too much. It took me too long to come up with this API, but in a future version I might remove the combination functions (`expectGlobalAndLocalErrors`, `expectGlobalAndModuleErrors`, ...) in favor of this API.
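As a sketch of how testing a data extract might look, for a hypothetical `rule` that extracts a mapping of module names to their imports. The function name `Review.Test.expectDataExtract` and the exact comparison format (a JSON string) are assumptions about what the API could look like, not necessarily the final names:

```elm
import Review.Test
import Test exposing (Test, test)


tests : Test
tests =
    test "extracts the imports of each module" <|
        \() ->
            """module A exposing (..)
import Html
import Page.Home
a = 1
"""
                |> Review.Test.run rule
                -- Hypothetical: compare the extract against an expected JSON string.
                |> Review.Test.expectDataExtract
                    """{ "A": [ "Page.Home", "Html" ] }"""
```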
If you want to try it out, it's a bit annoying but you can. You need to check out the `jfmengels/elm-review` and `jfmengels/node-elm-review` projects on your machine, switch to the `extracts` branch on both, add your rule to `review/src/ReviewConfig.elm` inside `jfmengels/elm-review`, and then run:

```bash
LOCAL_ELM_REVIEW_SRC=<path to jfmengels/elm-review>/src <path to jfmengels/node-elm-review>/bin/elm-review \
  --config <path to jfmengels/elm-review>/review \
  --report=json \
  --rules Your.Rule.Name
```
There are still a few details that I'm iffy on:
Is the JSON output sufficient? Should there be a way to visualize the data for a single rule without going through `--report=json`? It would be cool to see visual graphs in `--watch` mode, for instance, so that when you change your `update` function, you see the updated graph right away.

Feedback welcome!