epi052 / feroxbuster

A fast, simple, recursive content discovery tool written in Rust.
https://epi052.github.io/feroxbuster/
MIT License

[FEATURE REQUEST] Implementation of scan time limits per individual url when fuzzing in parallel #1070

Closed NotoriousRebel closed 4 months ago

NotoriousRebel commented 5 months ago

Is your feature request related to a problem? Please describe.

For example let's say I do:

cat urls.txt | feroxbuster --stdin --parallel 4 --threads 6 -k --depth 1 --timeout 10 -L 4 -w wordlist.txt -o outfolder

In some cases, even though the scan runs in parallel, a few URLs become bottlenecks and end up being fuzzed for 12+ hours. This can bog down total scan time tremendously, especially when the URL list contains 100+ entries and multiple URLs are stuck in this state.

Describe the solution you'd like

A new flag, maybe along the lines of --individual-time-limit or --url-time-limit, or whatever name makes the most sense. When running in parallel, this flag would track how long each individual URL has been fuzzed; if that time exceeds the limit set by the flag, feroxbuster would gracefully stop that scan and move on to the next URL in the file.
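A hypothetical invocation, using the suggested (not yet existing) flag name --url-time-limit, might look like:

# hypothetical flag: each parallel target would be abandoned after 30 minutes
cat urls.txt | feroxbuster --stdin --parallel 4 --url-time-limit 30m --threads 6 -k --depth 1 -w wordlist.txt -o outfolder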

Describe alternatives you've considered

As an alternative, what I have had to do is monitor with ps aux | grep ferox, write down which URLs are currently running, then check back throughout the day; if any are still running after an egregious amount of time, I kill -9 the PID. This is extremely inefficient and has led to scans taking days when they should take much less.
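In the meantime, a rough sketch of an external workaround, assuming GNU timeout and xargs are available (this bypasses feroxbuster's own --parallel handling by spawning one time-bounded process per URL):

# run up to 4 scans at once; send SIGINT to any scan still running after 5 minutes
cat urls.txt | xargs -P 4 -I{} timeout -s INT 5m feroxbuster -u {} --threads 6 -k --depth 1 --timeout 10 -L 4 -w wordlist.txt

Note that -o is dropped in this sketch, since independent processes would each need their own output file.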

Additional context

Some cases I've seen so far in which a URL can cause a bottleneck: a redirect to a login page, and a URL that serves a 504 timeout page. I have not been keeping track of other cases, but as I discover them I will edit this issue as needed.

epi052 commented 5 months ago

Howdy!

Does --time-limit satisfy this? I'm guessing not but want to make sure

NotoriousRebel commented 5 months ago

Thanks for the prompt response. I do not believe so, as --time-limit would cause the entire scan to stop if a bottleneck occurs, which wouldn't be ideal.
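For reference, the existing flag caps the whole session as described above, so a single stuck target would still end every other in-flight scan (duration format as used later in this thread):

# the entire run stops after 10 minutes, stuck URL or not
cat urls.txt | feroxbuster --stdin --parallel 4 --time-limit 10m -w wordlist.txt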

epi052 commented 5 months ago

I didn't think it was what you were after, but had to ask, lol.

It's a good suggestion; I'll take a look at how feasible this is / how much effort it requires and report back.

epi052 commented 5 months ago

also, just for clarification, you're asking for limits placed on an instance of feroxbuster (i.e. one of the parallel processes; at the URL level)? NOT time limits on each individual scan (folder level; each progress bar in a normal scan)

epi052 commented 5 months ago

choose carefully :sweat_smile: the URL level looks pretty easy to implement. haven't explored per-directory yet

NotoriousRebel commented 5 months ago

Correct, the former: just limits at the URL level.

epi052 commented 5 months ago

Ok, I think that's a pretty simple fix, tbh. I'll play with it this evening or tomorrow and see if my idea works out.

epi052 commented 5 months ago

pick up a new build from this pipeline and give it a shot; lmk how it goes

https://github.com/epi052/feroxbuster/actions/runs/7755538303

NotoriousRebel commented 5 months ago

Just tested it, and I even threw in one of the subdomains that is just a redirect to a login page; it seems to work flawlessly :)

cat test.txt | ./testferoxbuster --time-limit 5m --parallel 4 --stdin --threads 6 -L 4 -w wordlist.txt -o test_feroxbuster_limit --json

Have you had similar results in your testing?

epi052 commented 5 months ago

Glad to hear it!

Yea, it seemed to work afaict, but I strongly prefer that the ticket creator give fixes a run against their targets, since they're more familiar with what a solution should look like (from a user perspective).

Thanks for checking! I'll get this merged in sometime soon

epi052 commented 4 months ago

@all-contributors add @NotoriousRebel for ideas

allcontributors[bot] commented 4 months ago

@epi052

I've put up a pull request to add @NotoriousRebel! :tada: