Hi @JayKid, I appreciate the thought. I'm a little hesitant though, as I'm not sure how reliable such a feature would be. In theory, you could deduplicate issues by checking whether a new issue has the same HTML and target properties, but that's not especially reliable. Changes unrelated to the actual element can change how the target selector is created, and that might not be the best experience. Matching only on HTML wouldn't work well either, since the same HTML can be styled completely differently, which affects a number of rules.
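For illustration, the kind of check this would involve, using the `html` and `target` properties from axe-core's results format (the variable names here are placeholders, not a real implementation):

```js
// Illustrative only: flag a node as "already reported" when a node from
// a previous run has the same html snippet and the same target selector.
// `previousResults` and `newResults` are placeholder names.
const seen = new Set(
  previousResults.violations.flatMap(v =>
    v.nodes.map(n => `${n.html}|${n.target.join(' ')}`)
  )
);
const isDuplicate = node =>
  seen.has(`${node.html}|${node.target.join(' ')}`);
```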
I'm open to other ideas, but it has to be reliable.
Hi again @WilcoFiers!
I think we might not have understood each other, but in any case, here is another potential solution that I think is cleaner :)
My new proposal is a combined effort across both the axe-cli and axe-webdriverjs repos:

- In axe-webdriverjs, I would add a `disableRules` method to the `AxeBuilder`, in order to filter the complete list of rules (contained in the `runOnly` property) by removing the ones provided as a parameter (see the sketch after this list).
- In axe-cli, I would add a `-l` flag (it can be any other letter) to the list of accepted flags. Internally, this would use the "disable" capability already available in axe-core, through the newly added `disableRules` method in axe-webdriverjs.
With this approach, we don't run unnecessary rule validations only to "hide" them in the report afterwards, as my first proposal suggested.
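From the command line, I imagine usage would look something like this (the `-l` name is just the placeholder from above, and the exact syntax is up for discussion):

```sh
# Hypothetical: skip two rules on this run
axe http://localhost:3000 -l color-contrast,region
```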
I can/will code the PRs if you consider this solution appropriate :)
Thanks for the time!
Thanks for the clarification. That is indeed different from what I thought. So you want not just the ability to specify which rules run, but also which rules don't. Gotcha. That sounds alright to me. Feel free to create a PR for this. I don't have a good idea about what flag to use, so I'll leave that up to you.
Hi, first of all, thanks for the awesome tool(s)!
I am trying to integrate axe-cli into our dev flow, but since we are only introducing it now, we might not be able to fix all issues in the first runs. Our devs would then end up ignoring the tool's output because they cannot act on the issues anyway.
This led me to look into filtering which rules run, and I saw that you can specify which rules (or tags) you want to run via parameters. However, in our case this would mean manually listing all rules except one or two, which is not ideal...
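For example, as I understand the current flags (treat the exact names as my reading of the docs), you can only allow-list rules or tags:

```sh
# Run only these rules / this tag:
axe http://localhost:3000 --rules color-contrast,image-alt
axe http://localhost:3000 --tags wcag2a
# There's no "run everything except rule X" equivalent.
```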
I was thinking of creating a PR to add the capability of excluding some rules/violations from the report. I had a quick look, and some sort of filtering of the violations returned by the test run here should be fairly simple.
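Roughly, I was imagining something like this on top of the results object (the ignore list is a placeholder):

```js
// First idea: run all rules, then drop the ignored ones from the report.
const ignoredRules = ['color-contrast']; // placeholder
results.violations = results.violations.filter(
  v => !ignoredRules.includes(v.id)
);
```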
That being said, I'm not sure whether that would be interesting to you, or whether it's the appropriate way of implementing this feature.
What do you think?