
[Feature Request] Allow exclusion of URLs from crawler #2208

Open brocktaylor7 opened 2 years ago

brocktaylor7 commented 2 years ago

Is your feature request related to a problem? Please describe.

When using the crawler, there is an input that allows for the use of discovery patterns. However, this appears to be limited to including URLs, not excluding them.

Describe the solution you'd like

There is interest in adding a parameter to exclude URLs via regex (or other options), working in the same manner as the discovery patterns input.

Some possible use cases mentioned (sketched below):

- Exclude URLs that include query parameters (everything after /?)
- Exclude URLs that are in the domain but fall under a specific subdomain or path (like example.com/blog/....)
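For illustration, patterns for those two cases might look like the sketch below. Note that the excludeUrlPatterns name is hypothetical; no such input exists today, and the regexes are only meant to show the shape of the feature being requested.

```ts
// Hypothetical sketch only: excludeUrlPatterns is not an existing input.
// These regexes illustrate the two use cases above.
const excludeUrlPatterns: RegExp[] = [
    /\/\?/,                            // URLs containing query parameters (everything after /?)
    /^https:\/\/example\.com\/blog\//, // URLs under a specific path, e.g. example.com/blog/
];

// A crawler supporting exclusion would skip any discovered URL
// that matches one of the patterns.
const shouldSkip = (url: string): boolean =>
    excludeUrlPatterns.some((pattern) => pattern.test(url));

console.log(shouldSkip('https://example.com/page/?sort=asc')); // true  -> excluded
console.log(shouldSkip('https://example.com/blog/post-1'));    // true  -> excluded
console.log(shouldSkip('https://example.com/docs/intro'));     // false -> crawled
```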

Describe alternatives you've considered

Additional context

This is user feedback that we received for the ADO Extension. We've had at least 3 teams ask us about this feature. Original issue can be found here: https://github.com/microsoft/accessibility-insights-action/issues/1019

ghost commented 2 years ago

This issue has been marked as ready for team triage; we will triage it in our weekly review and update the issue. Thank you for contributing to Accessibility Insights!

ferBonnin commented 2 years ago

This needs more information in order to understand the work required. @lamaks can you help us understand better if there is an exclusion functionality already existing that needs to be wired up or if this would be something new that needs to be built?

ferBonnin commented 2 years ago

Per conversation with Maxim: the functionality doesn't exist yet and needs to be implemented. However, we support regular expressions that can be used to exclude URLs by pattern (the discoveryPatterns parameter), which might help. Currently the discovery pattern is for including URLs, but, like any regular expression, it can be tailored to match a restricted range/pattern, so it will select a subset of URLs based on the expression rules.
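For example, a minimal sketch of the idea using plain JavaScript regex semantics; whether the crawler's pattern handling preserves these semantics is a separate question, as the comments below explore:

```ts
// A discovery pattern is ultimately a regular expression, so it can be
// written to select only a subset of a site. This demonstrates standard
// JS regex behavior; the crawler may process patterns differently.
const pattern = /^https:\/\/example\.com\/(?!blog\/).*/;

console.log(pattern.test('https://example.com/docs/intro'));  // true  -> included
console.log(pattern.test('https://example.com/blog/post-1')); // false -> effectively excluded
```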

ferBonnin commented 2 years ago

@brocktaylor7 would this approach help for this request?

ghost commented 2 years ago

The team requires additional author feedback; please review their replies and update this issue accordingly. Thank you for contributing to Accessibility Insights!

ghost commented 2 years ago

This issue has been automatically marked as stale because it is marked as requiring author feedback but has not had any activity for 4 days. It will be closed if no further activity occurs within 3 days of this comment. Thank you for contributing to Accessibility Insights!

ferBonnin commented 2 years ago

Per offline conversation, leaving this as needs investigation, since the workaround of using discovery patterns for exclusion is fairly limited to certain scenarios.

ghost commented 2 years ago

This issue requires additional investigation by the Accessibility Insights team. When the issue is ready to be triaged again, we will update the issue with the investigation result and add "status: ready for triage". Thank you for contributing to Accessibility Insights!

andrewluebke-ms commented 2 years ago

Hi! My team is attempting to implement the workaround for exclusionary links in the discovery pattern, but we're having some trouble. I've detailed an example below that we would expect to work but doesn't (the regex itself checks out in a regex tester). Could you give us some feedback on where we might be going wrong and/or some examples of exclusionary regexes in the discovery pattern?

We are using the Azure DevOps Extension v3.

Here is a throwaway example using Google, similar to what we've tried:

- Dynamic Site URL: https://google.com
- Discovery pattern: https://google.com/(?!.*FilterMeOut).*

Expected:

- https://google.com (link is scanned and crawled)
- https://google.com/FilterMeOut (link is skipped)
- https://google.com/ValidPath (link is scanned and crawled)

Actual:

- https://google.com (link is scanned and crawled)
- https://google.com/FilterMeOut (link is skipped)
- https://google.com/ValidPath (link is skipped)

Only the first URL is scanned and nothing else is discovered.

The only scenario that fully crawls all our web pages is when we use the URL format provided in the docs: https://google.com/[.*]. However, we have quite a lot of links we do not want crawled, so excluding them is critical for us.
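For what it's worth, the pattern behaves the way we expect as a plain JavaScript regex; this sketch reproduces the check we ran in the regex tester:

```ts
// The same discovery pattern as above, evaluated as a standard JS regex.
const discoveryPattern = /^https:\/\/google\.com\/(?!.*FilterMeOut).*/;

console.log(discoveryPattern.test('https://google.com/FilterMeOut')); // false -> should be skipped
console.log(discoveryPattern.test('https://google.com/ValidPath'));   // true  -> should be crawled
```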

Thanks!

brocktaylor7 commented 2 years ago

Hello @andrewluebke-ms,

We use a third-party library (Apify) to handle the crawling portion of the extension. The discoveryPatterns input is passed directly into Apify's PseudoUrl class, so what will and won't work is determined by the implementation within Apify.

Here is the documentation for Apify's PseudoUrl, which includes their documentation on "special directives" (regexes) that can be passed in via our discoveryPatterns input: https://sdk.apify.com/docs/2.3/api/pseudo-url

There are a few rules about how things should be escaped to be handled properly in the Apify library, so it may be worth looking through their documentation to ensure that the regex you're passing in matches what they expect to receive.

This mechanism is meant for positive matching, not exclusion, and in my own testing it seems fairly quirky about what does and doesn't work. For example, I found that capture groups don't consistently work the way I'd expect in a standard JavaScript regex, and Apify's docs don't clearly explain why that would be the case.
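For reference, here is a rough sketch of how I understand the pseudo-URL syntax from the Apify 2.x docs linked above: the regex portion goes inside square brackets, and PseudoUrl's matches method can test a URL directly. Treat the exact import style and the lookahead behavior as assumptions; given the quirks described above, actual matching may differ.

```ts
// Sketch based on my reading of the Apify SDK 2.x docs: the [] directive
// embeds a regex inside the pseudo-URL. Whether a negative lookahead
// inside the brackets behaves like a plain JS regex is the open question.
import Apify from 'apify';

const purl = new Apify.PseudoUrl('https://google.com/[(?!.*FilterMeOut).*]');

console.log(purl.matches('https://google.com/ValidPath'));   // hoped for: true
console.log(purl.matches('https://google.com/FilterMeOut')); // hoped for: false
```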

We hope to be able to implement a solution for excluding URLs in the future that is much more robust and built to do what we're asking it to do, rather than trying to use a mechanism for a purpose it wasn't meant for. We have this on our radar, but currently don't have an ETA for when a better solution will be in place.