Via a report from the website owner, we have found that the crawler appears to be ignoring robots.txt. The rules at https://www.ukmodelshops.co.uk/robots.txt disallow access to /form/..., but for some reason it is still crawling there, e.g. as seen in the Kafka crawled topic:
The robots.txt file is quite long and complex, so perhaps the robots.txt parser is having problems with it. Unfortunately, this is quite difficult to debug effectively.
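One way to start narrowing this down is to feed the rules to a reference parser and check whether it agrees that /form/ is disallowed. Here is a minimal sketch using Python's stdlib `urllib.robotparser`; the inline robots.txt fragment is a simplified stand-in, not the real (much longer) file from the site:

```python
from urllib.robotparser import RobotFileParser

# Simplified stand-in for the real robots.txt, which is long and complex.
ROBOTS_TXT = """\
User-agent: *
Disallow: /form/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A compliant parser should refuse URLs under /form/...
print(rp.can_fetch("*", "https://www.ukmodelshops.co.uk/form/contact"))  # False
print(rp.can_fetch("*", "https://www.ukmodelshops.co.uk/index.html"))    # True
```

Running the same check against the full, real file (and against the crawler's own parser, which may differ from the stdlib one) would show whether the length/complexity of the rules is actually what trips it up.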
(As an interim measure, I've blocked the DC from crawling that site).
Ugh, it also seems worth noting that warc_filename and warc_offset appear to be null when they should not be. This is also true on the FC, so I guess it's a code error. Pretty sure this used to work fine!
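To quantify how widespread the null-field problem is, something like the following could be run over messages consumed from the crawled topic. This is a sketch under the assumption that each message is a JSON object carrying `warc_filename` and `warc_offset` fields as named above; the example record is hypothetical:

```python
import json

def missing_warc_fields(record_json: str) -> list[str]:
    """Return the WARC-related field names that are absent or null in a
    crawled-topic message (assumed to be a JSON object)."""
    record = json.loads(record_json)
    return [f for f in ("warc_filename", "warc_offset") if record.get(f) is None]

# Hypothetical message exhibiting the reported problem:
bad = '{"url": "https://www.ukmodelshops.co.uk/form/x", "warc_filename": null, "warc_offset": null}'
print(missing_warc_fields(bad))  # ['warc_filename', 'warc_offset']
```

Counting the results per topic/date would also show roughly when the regression landed.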
The crawler is fetching robots.txt (as seen in the internal crawled-URL CDX), and it can be seen internally at https://www.webarchive.org.uk/act/wayback/en/archive/20230820144656/https://ukmodelshops.co.uk/robots.txt (more recent crawls are identical, so they are not currently displayed in Wayback).