adamdehaven / fetchurls

A bash script to spider a site, follow links, and fetch urls (with built-in filtering) into a generated text file.
https://www.adamdehaven.com/blog/easily-crawl-a-website-and-fetch-all-urls-with-a-shell-script/
MIT License

Error: <domain> is unresponsive or is not a valid URL. #9

Closed Lilja closed 4 years ago

Lilja commented 4 years ago

All of the different domains I'm trying end up with the error message above.

Is there a debug mode or something similar I can try out?

[screenshot: terminal output showing the error]

adamdehaven commented 4 years ago

@Lilja Good catch 👍! The script was erroneously catching some 3xx HTTP statuses in a block that terminated the script. I've made an update, and will issue a new release momentarily.
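For anyone curious about the class of bug described above, here's a minimal sketch of the distinction involved. The function name and structure are hypothetical, not the script's actual code: the idea is that 2xx and 3xx responses mean the site is reachable, and only error statuses (or no response at all) should terminate the script.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the status classification described above.
# 3xx redirects should count as "responsive"; only 4xx/5xx (or an empty
# status, meaning no response) should be treated as a fatal error.
is_reachable() {
  local status="$1"
  case "$status" in
    2??|3??) return 0 ;;  # success and redirects: site is responsive
    *)       return 1 ;;  # client/server errors or no response: fail
  esac
}

# One common way to fetch just the HTTP status code with curl
# (-L follows redirects, -o /dev/null discards the body):
# status=$(curl -s -o /dev/null -w '%{http_code}' -L "https://example.com")
```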

In addition, based on the output in your screenshot, you may want to upgrade the version of grep on your computer to 3.1. It looks like the version you're on likely doesn't support the `-E, --extended-regexp` flag.
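If you want to verify this locally, a quick check might look like the following (the sample pattern is just an illustration, not the pattern the script uses):

```shell
# Print the installed grep version (first line of --version output):
grep --version | head -n 1

# Test whether extended regular expressions work; -E enables them
# (egrep is the legacy alias for grep -E on older systems):
echo "https://example.com/page" | grep -E 'https?://[^ ]+'
```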

adamdehaven commented 4 years ago

@Lilja go ahead and grab the newest release; you should be good to go :tada: