danzee1 closed this issue 4 years ago
Hi there. Thanks for the compliment!
What are your thoughts on piping the output through waybackurls? I'm trying to think of a way, but waybackurls takes a domain as input and gives URLs as output. The best way, I think, to use wordlists is with fuzzers/brute forcers. For example:
wget -nd -r example.com -q -E -R woff,jpg,gif,eot,ttf,svg,png,otf,pdf,exe,zip,rar,tgz,docx,ico,jpeg
cat * | wwwordlist --nh 4 --cl --co --max 10 -full | ffuf -recursion -w - -u https://sub.example.com/FUZZ -r
Or are you trying to get the content from the URLs that waybackurls returns? Try using:
cat domains.txt | waybackurls | xargs -n1 wget -qO - | wwwordlist -full
I've added this example to the readme, thanks for the tip ;)
In the meantime I've added an option to analyze a full text file with -full.
Hi @Zarcolio
Nice.. I was thinking of the workflow as below..
Subdomain scan with subfinder (or any other tool) => httprobe on the results => waybackurls or gau on all live hosts => remove duplicates and metadata filetypes => pipe that output to wwwordlist...
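The workflow above could be sketched roughly like this (a hedged sketch, assuming subfinder, httprobe, waybackurls and wwwordlist are on $PATH; the extension filter is an assumption mirroring the -R list from the wget example earlier):

```shell
subfinder -d example.com -silent | sort -u > subs.txt    # 1. subdomain scan
httprobe < subs.txt > live.txt                           # 2. keep hosts that respond
sed -E 's#^https?://##' live.txt | sort -u | waybackurls \
  | grep -ivE '\.(woff|jpe?g|gif|eot|ttf|svg|png|otf|pdf|exe|zip|rar|tgz|docx|ico)([?#]|$)' \
  | sort -u > urls.txt                                   # 3. archive URLs, minus media/meta filetypes and duplicates
xargs -rn1 wget -T 2 -qO - < urls.txt | wwwordlist -full # 4. fetch bodies, build the wordlist
```

Swap waybackurls for gau in step 3 if you prefer; the rest stays the same.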
Thanks for the response !!
Tried "cat domains.txt|waybackurls| xargs -n1 wget -qO - |wwwordlist -full"
But as expected, wget is expecting a URL and gives an error.
That's strange... What's the platform you're running? And what's the exact error message you're getting?
I'm running an updated Kali Rolling and this exact command works like a charm.
Also, added another example to the readme section for generating a wordlist if a target has code on Github. Clone the repo and fire this command inside the cloned folder:
find . -type f -exec strings {} +|wwwordlist -full
Does this work for you?
I am using the latest Ubuntu, and wget asks for a URL; that's the error.
Downloading Ubuntu 20.04 LTS now, but I'll be away in a little while. Will check it out when I'm back. I'm quite curious now, hopefully I can find a solution :)
Hi @Zarcolio No need to download Ubuntu... I have tested it again and now it's working... though I haven't seen any output yet.
Good to hear it's working, depending on the quantity of input, it may take a while to complete. I'll try to optimize the code this week, hopefully it'll help.
Did you manage to get some output from waybackurls | wwwordlist?
Closing the issue, since your problem has been resolved.
Hi again, would you mind testing it again? I've rewritten the code, so it's faster and has fewer bugs :)
Hi @Zarcolio Oh.. SO sorry... I will now download, test and will report back.. Thanks
Got this error:
xargs: unmatched double quote; by default quotes are special to xargs unless you use the -0 option
This means that some URL from waybackurls contains a double quote.
Please try it with urlcoding:
cat domains.txt | waybackurls | urlcoding -e | xargs -n1 wget -T 2 -qO - | wwwordlist
Also, I use 2ulb a lot to create a symbolic link for a script. This saves you from typing python3 and the full path + the extension of the script every time ;)
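If you'd rather not install another tool, this is roughly what such a symlink amounts to (the script path below is an illustrative assumption; the script itself needs to be executable and start with a python3 shebang line):

```shell
# Put a symlink named "wwwordlist" in a directory that's usually on $PATH
# (hypothetical clone location used here for illustration):
mkdir -p ~/.local/bin
ln -sf "$HOME/tools/wwwordlist/wwwordlist.py" ~/.local/bin/wwwordlist
```

After that, `wwwordlist` works from anywhere, as long as ~/.local/bin is on your $PATH.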
Or even better/faster, try it with parallel:
cat domains.txt | waybackurls | urlcoding -e | parallel --pipe xargs -n1 wget -T 2 -qO - | wwwordlist
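For completeness, here's a minimal reproduction of the quoting problem, plus GNU xargs' -d '\n' as an alternative to URL-encoding (the URLs are made up for illustration):

```shell
# One URL per line; the second contains an unmatched literal double quote,
# like some waybackurls output does:
printf 'https://example.com/a\nhttps://example.com/?q=%s\n' '"x' > urls.txt

# Plain xargs interprets quotes and aborts with "unmatched double quote":
xargs -n1 echo < urls.txt || true

# GNU xargs with -d '\n' takes each line literally, so the quote is harmless:
xargs -d '\n' -n1 echo < urls.txt
```

Note that -d is GNU-specific; on BSD/macOS xargs you'd need the urlcoding approach instead.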
Hi @Zarcolio
Cool tool; I've just run it. I was thinking of piping the output of waybackurls into it. Is that possible?
Sincerely,