adamdehaven / fetchurls

A bash script to spider a site, follow links, and fetch urls (with built-in filtering) into a generated text file.
https://www.adamdehaven.com/blog/easily-crawl-a-website-and-fetch-all-urls-with-a-shell-script/
MIT License

[Feature Request] username and password #8

Closed: TonyStark closed this issue 4 years ago

TonyStark commented 4 years ago

Suppose a site asks for a username and password, like http://username:password@example.com. How can I fetch URLs from that site?

adamdehaven commented 4 years ago

@TonyStark How about that for quick turnaround? 🎉 I was already working on some changes, so this was timely.
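
For reference, the new credential flags can presumably be combined with the usual invocation along these lines (the long-form flag names here are an assumption; only the short -u/-p forms appear later in this thread):

bash fetchurls.sh -u <username> -p <password>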

TonyStark commented 4 years ago

Thanks for the quick improvement. I tried it as described in the README, but it doesn't seem to be working:

$ bash fetchurls.sh -u admin -p pass123

Fetch a list of unique URLs for a domain.

Enter the full domain URL ( https://example.com )
Domain URL: https://admin.example.io/files

ERROR: 'https://admin.example.io/files' is unresponsive or is not a valid URL.
       Ensure the site is up by checking in your browser, then try again.

adamdehaven commented 4 years ago

@TonyStark Hmm... it worked in my tests; however, I didn't have an actual auth URL to test with. If you have something set up where I can try hitting the URL with credentials, I can take a look.

TonyStark commented 4 years ago

OK, give me your email and I will try to arrange something for you. Edit: I can only give you the site, because I can't share the login, so you can figure out why the site shows as unresponsive in the terminal.

adamdehaven commented 4 years ago

I passed --username to wget instead of --user. Fixing now... will update in a moment.
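
For context, wget's basic-auth flags are --user and --password; --username is not a valid wget option, so the request would likely fail before the site was ever reached. A minimal sketch of the corrected call, where the surrounding spider options and variable names are assumptions rather than the script's actual code:

# Sketch only: spider flags and variable names are assumed, not copied from fetchurls.sh
wget --spider --recursive --level=inf --no-verbose \
     --user="$USERNAME" --password="$PASSWORD" \
     "$DOMAIN" 2>&1 | grep -Eo 'https?://[^" ]+' | sort -u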

adamdehaven commented 4 years ago

Fixed the implementation in v3.1.1.

TonyStark commented 4 years ago

Thanks Adam, it's working fine :)