ffuf / ffuf

Fast web fuzzer written in Go
MIT License

AutoCalibrate seems to ignore redirects #64

Closed · fang0654 closed this issue 4 years ago

fang0654 commented 4 years ago

Hello! When using the awesome -ac option, it seems to ignore 301s and 302s. They don't get added to the filters. As an example, I made a sample wordlist with:

test1
admin
stuff
othertest

Running this against http://www.irccloud.com (which has a public bounty program) returns:

ffuf -w /tmp/wordlist.txt -u http://www.irccloud.com/FUZZ -ac -x http://127.0.0.1:8080

othertest               [Status: 301, Size: 0, Words: 1]
test1                   [Status: 301, Size: 0, Words: 1]
stuff                   [Status: 301, Size: 0, Words: 1]
admin                   [Status: 301, Size: 0, Words: 1]
:: Progress: [4/4] :: 0 req/sec :: Duration: [0:00:00] :: Errors: 0 ::

In the proxy, I also see the three calibration requests, and they all return 301 with the same size and word counts as well. I'm going to poke around in the source a bit and see if I can figure out a fix, but Go is definitely not my strong point!

fang0654 commented 4 years ago

I think I found the issue. It has nothing to do with redirects per se: autocalibration only accepts a baseline response for filtering if its word count or size is greater than 1. I dropped the threshold to greater than 0, and that seems to have resolved it.

I can definitely see where this could be problematic: if a site is returning 302/1/0 for everything invalid but 301/1/0 for valid directories, you'd miss them. Then again, you run that risk with any filter. I'll still submit a pull request, although I think a better solution might be to add maps for the filters from autocalibration, so you can match on "code and size" or "code and words" instead of just size or words.

joohoi commented 4 years ago

I think we should just error out with an informative message here, instead of allowing the autocalibration, telling the user that because the server is returning redirects for everything, autocalibration cannot be used for this target.

This is because almost always in this kind of case it requires careful manual inspection to be able to catch relevant results. What do you think?

fang0654 commented 4 years ago

From my own experience, when you have blanket redirects, you usually have one of two scenarios:

  1. A blanket rule where all traffic is being redirected, i.e. http -> https.

  2. A redirect to something such as a login page, or a retired application that redirects to some company's replacement application/website

For the first case, you would just get no hits if it was caught in the autocalibrate, which would be expected behaviour (IMHO).

In the second case, you can sometimes still hit explicit files, get back 200/401/403, and find stuff. You'd even still get the 301s from folders, since those usually have a higher word count. This would also be expected behaviour.

I've been using it this way for the past week and have been getting good results (and much cleaner!).

joohoi commented 4 years ago

I think the first use case would be better off exiting early: if ffuf is run as part of an automation pipeline, it should be easy to catch, and if it's run manually, the user will be informed early on about the situation at hand.

For the second one I think the user is way better off by calibrating the matchers and filters by hand and knowing what's going on. -ac would effectively be hiding this fact from the user and they might miss something because of this.

fang0654 commented 4 years ago

What would be the difference between that and a site that redirects with a body? (which gets filtered now). A lot of sites redirect as a default action for invalid pages, and most of them do so with some sort of body. While it would make sense not to waste time on case 1, you don't know that's the case without actually scanning. The only way to be absolutely certain is to look at the server config, and if you're at that point there really isn't much point in scanning.

Not trying to be adversarial, it just doesn't make much sense to me why "redirect without a body" is a special case. I made the change before starting on a pretty large external engagement, and have managed to uncover a lot of "retired" applications, where anything in the root of the site (other than valid folders) would return a 302 to the new application (without a body). Under those folders I still found the old application, which has resulted in some juicy vulnerabilities. Just exiting on the filter would have missed all of that, and the target could easily have gotten lost in the noise of sites that are erroring out for other reasons (i.e. no longer up).

joohoi commented 4 years ago

> What would be the difference between that and a site that redirects with a body? (which gets filtered now). A lot of sites redirect as a default action for invalid pages, most of them do so with some sort of body. ... Not trying to be adversarial, it just doesn't make much sense to me why "redirect without a body" is a special case.

I was thinking that after establishing this flow of erroring out with an understandable error message in the no-body case, this would be extended to the currently working redirect-with-a-body situation.

Now, thinking more about this and reading your example, I'm starting to lean more towards the change you are proposing (and wrote a PR for). This is mostly because any kind of autocalibration should not be trusted as being 100% effective, and, as you said, erroring out might leave even more things undiscovered.

I would however love to print a warning message to stdout if this kind of situation is detected (including redirects with a body too, of course) to notify the user that there might be something they should look into more closely.

Thanks for arguing your case here; I think you are right. I'm reopening PR #65.

fang0654 commented 4 years ago

Thanks for listening! And I agree with you 100% that there should still be a message to stdout!

joohoi commented 4 years ago

This was resolved in #80 , closing the issue. Thanks again for bringing this up!