GoogleChrome / lighthouse

Automated auditing, performance metrics, and best practices for the web.
https://developer.chrome.com/docs/lighthouse/overview/
Apache License 2.0

[SEO Audits] Integrate robots.txt analysis #4356

Closed: rviscomi closed this issue 6 years ago

rviscomi commented 6 years ago

Using a JS-based robots.txt parser (like this one), validate the file itself and apply existing SEO audits whenever applicable.
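
A minimal sketch of the retrieval step this implies, assuming the file is fetched from the audited page's origin in a fetch-capable context (the helper name and return shape are illustrative, not the eventual gatherer API):

```js
// Hypothetical helper: fetch robots.txt from the same origin as the audited page.
// Both the status and the body are returned so later checks can inspect either.
async function fetchRobotsTxt(pageUrl) {
  const robotsUrl = new URL('/robots.txt', pageUrl).href;
  const response = await fetch(robotsUrl);
  return {status: response.status, content: await response.text()};
}
```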

This integration has two parts:

robots.txt is valid (new audit)

Audit group: Crawling and indexing
Description: robots.txt is valid
Failure description: robots.txt is not valid
Help text: If your robots.txt file is malformed, crawlers may not be able to understand how you want your website to be crawled or indexed. Learn more.

Success conditions:

Page is not blocked from indexing (existing audit)

Add the following success condition:

Note that directives may be applied to the site as a whole or a specific page. Only fail if the current page is blocked from indexing (directly or indirectly).
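
A minimal sketch of that last condition, assuming the allow/disallow patterns applicable to the crawler have already been extracted from the file (wildcards are ignored and the rule shape is illustrative):

```js
// Returns true when the given page path is blocked by the applicable rules.
// Per the note above, "Disallow: /" blocks the whole site, while
// "Disallow: /private/" only blocks pages under that path.
function isPageBlocked(rules, pagePath) {
  // Longest matching pattern wins; allow beats disallow on a tie.
  let best = null;
  for (const rule of rules) {                         // rule = {type: 'allow'|'disallow', pattern: '/...'}
    if (rule.pattern === '') continue;                // an empty pattern matches nothing
    if (!pagePath.startsWith(rule.pattern)) continue; // no wildcard support in this sketch
    if (best === null ||
        rule.pattern.length > best.pattern.length ||
        (rule.pattern.length === best.pattern.length && rule.type === 'allow')) {
      best = rule;
    }
  }
  return best !== null && best.type === 'disallow';
}

// Example: a site-wide disallow fails every page...
isPageBlocked([{type: 'disallow', pattern: '/'}], '/about');          // true
// ...but a scoped disallow only fails pages it actually covers.
isPageBlocked([{type: 'disallow', pattern: '/private/'}], '/about');  // false
```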

kdzwinel commented 6 years ago

Do we want to let the user know if /robots.txt fails with something like HTTP 500? IMO, if the response code is in the HTTP 500-600 range, we can safely report it as an issue.
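
A minimal sketch of that check, assuming the response status for /robots.txt is already available (the function name is illustrative):

```js
// Treat any 5xx response for /robots.txt as a server-side failure worth reporting.
// A 404 is not flagged here: a missing robots.txt simply means no crawling restrictions.
function isRobotsTxtServerError(status) {
  return status >= 500 && status < 600;
}
```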

kdzwinel commented 6 years ago

```
User-agent: Googlebot
Disallow: / # everything is blocked for googlebot

User-agent: *
Disallow: # but allowed for everyone else
```

Should we fail in such a case? What about a robots.txt that only blocks, e.g., Googlebot-Image, or Bing, Yandex, DuckDuckGo? 🤔

rviscomi commented 6 years ago

For consistency with https://github.com/GoogleChrome/lighthouse/issues/3182, let's try to avoid distinguishing between crawlers. If the common case is the * UA, then that's the one we should check. If possible, it would be great to warn with something like "you passed, but you're blocking googlebot".

The alternative is to fail the audit when seeing anything resembling noindex, which seems too strict.

I'd also love to see the contents echoed back in the extra info table or similar. Just showing it to users is a sort of manual validation, even if the audit passes. As a secondary benefit, this would be great for data mining later.
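
A rough sketch pulling these ideas together, with a simplified parser that groups records by user-agent; the result shape, warning text, and helper names are illustrative rather than the final audit code:

```js
// Simplified grouping of robots.txt records by user-agent, just enough to show
// the "* is the common case, warn on crawler-specific blocks" idea.
function parseGroups(content) {
  const groups = new Map(); // lowercased user-agent -> array of {name, value}
  let currentAgents = [];
  let inDirectives = false;
  for (const rawLine of content.split('\n')) {
    const line = rawLine.split('#')[0].trim();   // drop comments
    const match = line.match(/^([^:]+):(.*)$/);
    if (!match) continue;
    const name = match[1].trim().toLowerCase();
    const value = match[2].trim();
    if (name === 'user-agent') {
      if (inDirectives) {
        currentAgents = [];
        inDirectives = false;
      }
      const agent = value.toLowerCase();
      currentAgents.push(agent);
      if (!groups.has(agent)) groups.set(agent, []);
    } else {
      inDirectives = true;
      for (const agent of currentAgents) groups.get(agent).push({name, value});
    }
  }
  return groups;
}

// Fail only on the common case (the '*' group blocking everything), but surface a
// warning when a Googlebot-specific group blocks everything even though '*' passes.
// The raw contents are echoed back so users can eyeball the file even when the audit passes.
function checkRobotsBlocking(content) {
  const groups = parseGroups(content);
  const blocksEverything = agent => {
    const directives = groups.get(agent) || [];
    return directives.some(d => d.name === 'disallow' && d.value === '/') &&
           !directives.some(d => d.name === 'allow' && d.value !== '');
  };
  return {
    failed: blocksEverything('*'),
    warnings: !blocksEverything('*') && blocksEverything('googlebot')
      ? ['robots.txt allows *, but blocks Googlebot entirely']
      : [],
    displayedContent: content,
  };
}
```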

kdzwinel commented 6 years ago

For the record, here is the full set of rules I've put together from various sources and implemented in the robots.txt validator (a sketch of a few of these checks follows the list):

Rules

  1. request for /robots.txt doesn't return HTTP 500+

  2. robots.txt file is smaller than 500 KB (gzipped) - this one is WIP

  3. only empty lines, comments and directives (matching "name: value" format) are allowed

  4. only directives from the safelist are allowed:

    'user-agent', 'disallow', // standard
    'allow', 'sitemap', // universally supported
    'crawl-delay', // yahoo, bing, yandex
    'clean-param', 'host', // yandex
    'request-rate', 'visit-time', 'noindex' // not officially supported, but used in the wild

  5. there are no 'allow' or 'disallow' directives before 'user-agent'

  6. 'user-agent' can't have empty value

  7. 'sitemap' must provide an absolute URL with http/https/ftp scheme

  8. 'allow' and 'disallow' values should be either: empty, or start with "/" or "*"

  9. 'allow' and 'disallow' should not use '$' in the middle of a value (e.g. "allow: /file$html")
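
A minimal sketch of the line-level checks (rules 3-9), assuming the raw file contents are already available; directive parsing is simplified and the error messages are illustrative:

```js
// Directives the validator accepts (rule 4).
const SAFELIST = new Set([
  'user-agent', 'disallow', 'allow', 'sitemap', 'crawl-delay',
  'clean-param', 'host', 'request-rate', 'visit-time', 'noindex',
]);

// Returns an array of {line, message} errors for the syntax-level rules above.
function validateRobotsTxt(content) {
  const errors = [];
  let sawUserAgent = false;

  content.split('\n').forEach((rawLine, index) => {
    const line = rawLine.split('#')[0].trim(); // comments are always allowed
    if (line === '') return;                   // empty lines are allowed (rule 3)

    const match = line.match(/^([^:]+):(.*)$/);
    if (!match) {
      errors.push({line: index + 1, message: 'Syntax not understood (expected "name: value")'});
      return;
    }
    const name = match[1].trim().toLowerCase();
    const value = match[2].trim();

    if (!SAFELIST.has(name)) {
      errors.push({line: index + 1, message: `Unknown directive "${name}"`}); // rule 4
      return;
    }
    if (name === 'user-agent') {
      sawUserAgent = true;
      if (value === '') errors.push({line: index + 1, message: 'No user-agent specified'}); // rule 6
    }
    if (name === 'allow' || name === 'disallow') {
      if (!sawUserAgent) {
        errors.push({line: index + 1, message: 'No user-agent specified before allow/disallow'}); // rule 5
      }
      if (value !== '' && !value.startsWith('/') && !value.startsWith('*')) {
        errors.push({line: index + 1, message: 'Pattern should be empty or start with "/" or "*"'}); // rule 8
      }
      const dollarIndex = value.indexOf('$');
      if (dollarIndex !== -1 && dollarIndex !== value.length - 1) {
        errors.push({line: index + 1, message: '"$" should only be used at the end of a pattern'}); // rule 9
      }
    }
    if (name === 'sitemap' && !/^(https?|ftp):\/\//i.test(value)) {
      errors.push({line: index + 1, message: 'Sitemap should be an absolute http/https/ftp URL'}); // rule 7
    }
  });

  return errors;
}
```

The HTTP status check (rule 1) and the size limit (rule 2) would run against the raw response before these line-level rules.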

Test

I ran my validator against the top 1000 domains and got the following errors for 39 of them: https://gist.github.com/kdzwinel/b791967eb66d0e2925ea22c8ca14233a

Resources

Various docs:

and online validators: