This is for browsertrix-crawler 1.1.3.
When specifying an incorrect (how?) `--include` param, the crawl refuses to start and complains about an `Invalid seed` error.

A crawl with a short `--include` pattern works; adding a huge `--include` param does not, and the crawl aborts with the `Invalid seed` message.
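The exact commands and output from the report are not reproduced here; the following is only a hypothetical sketch of the shape of the invocation, assuming the standard Docker entrypoint, with a made-up seed URL and pattern (`example.com` and the `/docs/` regex are illustrative, not from the report):

```sh
# Hypothetical reconstruction, not the reporter's actual commands.
# A crawl with a short --include regex starts fine:
docker run -v $PWD/crawls:/crawls/ webrecorder/browsertrix-crawler crawl \
  --url https://example.com/ \
  --include 'https://example\.com/docs/.*'

# Swapping the pattern above for a huge (multi-kilobyte) regex makes the
# crawler refuse to start, reporting the seed as invalid rather than the
# regex as unparsable.
```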
I would expect a different error message, as the mentioned URL is valid. From what I understand, it's the combination of the seedUrl, include, and exclude that makes it invalid: I haven't looked in detail, but I suppose the include scope ends up not covering the seed.

That very long (!) regex doesn't appear to be parsable by JS. The include scope is part of the seed. We can add another error message saying that the regex is invalid.
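If the root cause is the pattern failing to compile, one quick way to confirm it (a hypothetical check, not something from the thread; `$HUGE_INCLUDE_REGEX` is a placeholder for the actual pattern) is to ask Node.js to compile the pattern directly:

```sh
# Hypothetical check: does Node's RegExp engine accept the pattern at all?
node -e 'try { new RegExp(process.argv[1]); console.log("regex ok"); } catch (e) { console.error("invalid regex:", e.message); process.exit(1); }' "$HUGE_INCLUDE_REGEX"
```

If this prints an error, the crawler could surface that message instead of the generic `Invalid seed` one.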