Open EdJoPaTo opened 1 year ago
This is a good idea. I'm not sure which solution I prefer. Maybe the last time the robots.txt got crawled could be cached? That way one could implement a behavior that is in between, or a combination of, the two proposed solutions. But I'm not sure if that is worth the added complexity.
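The caching idea could look roughly like this: remember the last verdict per host together with when it was fetched, and only re-fetch the robots.txt once the entry is older than some TTL. A minimal sketch (all names here are hypothetical, not from the codebase):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Cached verdict of a robots.txt lookup for one host (hypothetical names).
struct RobotsCacheEntry {
    fetched_at: Instant,
    allowed: bool,
}

struct RobotsCache {
    max_age: Duration,
    entries: HashMap<String, RobotsCacheEntry>,
}

impl RobotsCache {
    fn new(max_age: Duration) -> Self {
        Self { max_age, entries: HashMap::new() }
    }

    /// Returns the cached verdict, or None when the entry is missing or
    /// stale and robots.txt should be fetched again.
    fn lookup(&self, host: &str) -> Option<bool> {
        self.entries
            .get(host)
            .filter(|e| e.fetched_at.elapsed() < self.max_age)
            .map(|e| e.allowed)
    }

    fn store(&mut self, host: String, allowed: bool) {
        let entry = RobotsCacheEntry { fetched_at: Instant::now(), allowed };
        self.entries.insert(host, entry);
    }
}
```

With a TTL of, say, an hour, repeated crawl runs would only pay the extra robots.txt request once per host per hour instead of on every attempt.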
Probing the robots.txt when checking the config seems like a behavior we should have regardless of the other behaviors under discussion. Not only do we check the robots.txt at least once, we can also leverage this to check whether the domains/hosts are actually reachable, a nice bit of extra UX for free. (This of course adds the deployment dependency of being run on an online machine, but given the nature of the tool, that is acceptable imho.)
Okay, I think our best bet is robotstxt. It has zero dependencies and the code looks well commented. An alternative could be robotparser-rs: it depends on url and percent-encoding and has slightly fewer "used by" entries, but seems to be under more active development when looking at the git history.
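To make the decision concrete, the core question either crate answers is "does this robots.txt allow this path?". A deliberately simplified stand-in for that check, using only the standard library (the real crates additionally handle per-agent groups, `Allow:` rules, wildcards, and precedence per RFC 9309):

```rust
/// Very simplified robots.txt check: returns false when `path` matches a
/// `Disallow:` prefix inside a `User-agent: *` group. This is only an
/// illustration of the decision robotstxt/robotparser-rs would make for us,
/// not a compliant parser.
fn is_allowed(robots_txt: &str, path: &str) -> bool {
    let mut in_star_group = false;
    for line in robots_txt.lines() {
        // Strip comments and surrounding whitespace.
        let line = line.split('#').next().unwrap_or("").trim();
        if let Some(agent) = line.strip_prefix("User-agent:") {
            in_star_group = agent.trim() == "*";
        } else if in_star_group {
            if let Some(rule) = line.strip_prefix("Disallow:") {
                let rule = rule.trim();
                // An empty Disallow means "allow everything".
                if !rule.is_empty() && path.starts_with(rule) {
                    return false;
                }
            }
        }
    }
    true
}
```

Whichever crate we pick, the crawler would fetch `https://<host>/robots.txt` once and run every candidate path through a check like this before requesting it.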
Is your feature request related to a problem? Please describe.
Bots on the internet should honor the robots.txt (see RFC 9309).
Describe the solution you'd like
Check the robots.txt of every domain being crawled before crawling the actual content. I think the tool should provide an option to ignore the robots.txt, but complain about it on stdout when that option is enabled.
The downside is an additional request to the server on every crawling attempt.
Describe alternatives you've considered
Provide an additional subcommand that checks the domains in the config against their robots.txt. The user of this tool can run the command to see whether the host allows the crawling. This way the additional requests are only made on demand, and the user can decide to remove their crawling attempts. Maybe integrate this into the check command, which already validates the config, and error when the robots.txt denies a path?
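The check-command integration could be sketched like this, with fetching abstracted behind a trait so the probe logic is testable offline. Trait, function, and the `Disallow` string match are all hypothetical placeholders, not existing code:

```rust
/// Sketch of the proposed check extension (hypothetical names): fetching is
/// injected so the robots.txt probe can be exercised without a network.
trait RobotsFetcher {
    /// Returns the robots.txt body for a host, or None when unreachable.
    fn fetch_robots_txt(&self, host: &str) -> Option<String>;
}

/// Collects one error message per configured (host, path) pair that is
/// either unreachable or denied by its robots.txt.
fn check_config<F: RobotsFetcher>(fetcher: &F, targets: &[(&str, &str)]) -> Vec<String> {
    let mut errors = Vec::new();
    for (host, path) in targets {
        match fetcher.fetch_robots_txt(host) {
            None => errors.push(format!("{host}: unreachable")),
            // Placeholder for a real robots.txt matcher from one of the crates.
            Some(body) if body.contains(&format!("Disallow: {path}")) => {
                errors.push(format!("{host}{path}: denied by robots.txt"));
            }
            Some(_) => {}
        }
    }
    errors
}
```

An empty result would mean the config passes; a non-empty one would make the check command exit with an error, which also gives us the reachability check mentioned above for free.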