Hi :-)
I'll do my own research, but does anyone know whether DirBuster supports
brute forcing pages along with subdirectories? E.g., we could specify extensions
(.jsf, .jsp, .asp) for the pages to look for, say in a UI parameter,
and DirBuster would find for us:
<bruteforcedir>/admin
<bruteforcedir>/admin.asp
<bruteforcedir>/happy
<bruteforcedir>/happy/admin
<bruteforcedir>/happy/admin/kumar.jsf
Based on my experience, this could be useful: HP WebInspect provides better
results because it searches for pages with extensions as well.
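To illustrate the request, here is a minimal, hypothetical sketch (the class and method names are mine, not DirBuster's or ZAP's) of expanding each wordlist entry into a directory candidate plus one file candidate per configured extension:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: expand each wordlist entry into a directory
// candidate plus one file candidate per configured extension.
public class CandidateExpander {

    public static List<String> expand(String base, List<String> words, List<String> extensions) {
        List<String> candidates = new ArrayList<>();
        for (String word : words) {
            candidates.add(base + "/" + word);           // directory form, e.g. /admin
            for (String ext : extensions) {
                candidates.add(base + "/" + word + ext); // file form, e.g. /admin.asp
            }
        }
        return candidates;
    }

    public static void main(String[] args) {
        List<String> words = List.of("admin", "happy");
        List<String> exts = List.of(".jsf", ".jsp", ".asp");
        expand("http://example.org", words, exts).forEach(System.out::println);
    }
}
```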
Cheers,
Sergey
Original comment by serge....@gmail.com
on 18 Feb 2013 at 2:57
Hi.
Yes, it supports testing extensions against the provided list of files and
"directories", but it doesn't test extensions against the directories/files
found while parsing.
Best regards.
Original comment by THC...@gmail.com
on 18 Feb 2013 at 3:44
Hi,
I'll take a look at this, as I'm very interested in this functionality. Pages
with common names are encountered often as well.
There has also been an issue where brute forcing the site 'host' would populate
a new site in the tree named 'host:80' or 'host:443' (depending on whether the
site is served over HTTP or HTTPS).
This is also really inconvenient compared to WebInspect, but it might require
raising a new issue; I'll look into that. I'd also love to have an icon (or a
differently colored link) indicating whether a URL was manually navigated,
crawled, or brute forced.
Cheers,
Sergey
Original comment by serge....@gmail.com
on 19 Feb 2013 at 8:28
Hi.
OK.
Appending the port is DirBuster's behaviour, but it's an easy and
straightforward change (to append the port only when it's not the default port).
The "Sites" tab already shows an icon based on the source of the URL (crawling:
spider icon; forced browse: hammer icon). Is there any other place that should
show an icon based on the source of the URL?
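The "only append when not the default port" check could be sketched as follows (a hypothetical helper, not ZAP's actual code):

```java
// Sketch: build the site name shown in the Sites tab, appending the port
// only when it differs from the scheme's default (80 for HTTP, 443 for HTTPS).
public class SiteName {

    public static String siteName(String scheme, String host, int port) {
        boolean isDefault = ("http".equalsIgnoreCase(scheme) && port == 80)
                || ("https".equalsIgnoreCase(scheme) && port == 443);
        return isDefault ? host : host + ":" + port;
    }
}
```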
Best regards.
Original comment by THC...@gmail.com
on 19 Feb 2013 at 12:11
Thanks :)
I hadn't noticed the icons because the sites (port and no-port) have always
been different. Anyway, I've also been thinking about treating manually added
URLs differently, but I don't think it needs top priority.
Should I update the 2.0.0 branch with the fix for the default port, or do you
think it's worth going directly to trunk?
I know you're the best person to handle anything related to DirBuster, but if
you feel these changes are of low importance, I could take the time to
implement them and share a diff with you; what do you think? (I wouldn't want
to commit this directly to trunk/2.0.0.)
Cheers,
Sergey
Original comment by serge....@gmail.com
on 19 Feb 2013 at 3:22
Hi.
By "URLs added manually", do you mean the ones accessed with the manual
request editor?
It's better to apply the changes only to the trunk (they may not be suitable
for the next 2.0.x release).
That's saying too much. The required changes are easy and straightforward, but
even if they weren't, I wouldn't see any problem with anyone else implementing
them. I would gladly help and/or review the changes (if help/review is wanted).
Sounds good to me. Attach the patch with the changes in a new issue, or send
it directly to me (whichever you think is better).
Best regards.
Original comment by THC...@gmail.com
on 22 Feb 2013 at 5:24
You're right, the changes seem to be straightforward.
Attached is a diff that enables file brute forcing by defining file
extensions (comma-separated) in the Force Browse options.
The UI is just a demo (but still quite usable). I have tested the extension,
and it seems to be working, although I have encountered the following problem:
- crawled URLs are not getting brute forced, for some reason.
I'd love to see DirBuster picking up URLs found by the AJAX/traditional
spiders, as well as its built-in spider...
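For reference, the comma-separated extensions option described above could be parsed along these lines (a hypothetical sketch, not the code in the attached diff), including skipping empty entries so no bare dot ends up appended to the forced browse strings:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical parser for a comma-separated extensions option:
// trims entries, skips empties, and normalises a leading dot.
public class ExtensionsOption {

    static List<String> parse(String raw) {
        List<String> extensions = new ArrayList<>();
        if (raw == null) {
            return extensions;
        }
        for (String part : raw.split(",")) {
            String ext = part.trim();
            if (ext.isEmpty()) {
                continue; // an empty entry would otherwise produce a bare "."
            }
            extensions.add(ext.startsWith(".") ? ext : "." + ext);
        }
        return extensions;
    }
}
```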
Looking forward to your comments.
Cheers,
Sergey
Original comment by serge....@gmail.com
on 22 Feb 2013 at 11:19
Attachments:
Hi.
Which crawled URLs are not being brute forced? The directories?
Some comments regarding the changes:
- BruteForce.java
- On line 109, it would probably be better to also check that the file extensions string is not empty. When the option is enabled and the string is empty, an "empty" extension is created, leading DirBuster to append a bare dot to the forced browse strings.
- On line 110, consider removing the trailing whitespace after the end of the statement;
- On line 112, consider using Collections.emptyList() instead of creating a new instance;
- On line 149, consider passing the initial capacity (the size of "extensions") when creating the Vector.
- BruteForceParam.java
- On the line 22, the import should be removed as it's not used;
- The method parse() should be changed to load the (new) saved options;
- Messages.properties
- On line 4, consider moving the new message under the message "bruteforce.options.label.addfile" to keep them in alphabetical order.
- OptionsBruteForcePanel.java
- On lines 87, 227 and 229, consider removing the trailing whitespace after the end of the statement;
- On the method initParam(Object):
- The extensions from the BruteForceParam are not being set to the file extensions text field ("txtFileExtensions");
- The edits of the file extensions text field should be discarded (ZapTextField.discardAllEdits()).
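The two API suggestions above (a shared empty list instead of a new instance, and a pre-sized Vector) amount to the following (illustrative only, not the reviewed code):

```java
import java.util.Collections;
import java.util.List;
import java.util.Vector;

// Illustration of two of the review suggestions above (hypothetical context).
public class ReviewHints {

    // Return the shared immutable empty list instead of allocating a new one.
    static List<String> noExtensions() {
        return Collections.emptyList();
    }

    // Size the Vector up front when the final element count is known,
    // avoiding intermediate re-allocations as it grows.
    static Vector<String> copyExtensions(List<String> extensions) {
        Vector<String> v = new Vector<>(extensions.size());
        v.addAll(extensions);
        return v;
    }
}
```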
Let me know if you have any question about the comments.
P.S. You should create a new issue for this change.
Best regards.
Original comment by THC...@gmail.com
on 25 Feb 2013 at 5:53
Hi,
Many thanks for the thorough review of the patch :-)
I have created a new issue for this, attaching a patch with all the suggested
changes applied:
https://code.google.com/p/zaproxy/issues/detail?id=537&start=200
>> What crawled URLs are not being brute forced? the directories?
Actually, I've found DirBuster behaving somewhat unexpectedly (with regard to
crawling, not brute forcing). I'd love it to work the following way:
- brute forcing a site or a directory: existing behavior;
- brute forcing a directory and all subdirectories: existing behavior, plus brute forcing all subdirectories added manually or already found by the crawler.
This is what I expected from forced browsing of subdirectories, after all.
Cheers,
Sergey
Original comment by serge....@gmail.com
on 26 Feb 2013 at 12:44
Hi.
Are you suggesting passing the (child) URLs found/accessed by/through ZAP to
DirBuster when forced browsing a directory and its children?
Best regards.
Original comment by THC...@gmail.com
on 27 Feb 2013 at 3:37
Hi,
I'll provide an example of what behavior I'd expect.
1. The existing site structure in the Sites tab is the following:
http://www.example.org/
- foo/
- bar/
  - zed/
  - gamma/
    - lucky_you/
  - youshouldntseethis65/
- alma/
2. I select to Force Browse directory (including subdirectories)
http://www.example.org/bar/
3. Force Browse adds the following directories to the Force Browse queue:
http://www.example.org/bar/
http://www.example.org/bar/zed/
http://www.example.org/bar/gamma/
http://www.example.org/bar/gamma/lucky_you/
http://www.example.org/bar/youshouldntseethis65/
Those (along with directories found by Force Browse) should be brute forced.
Even if Force Browse doesn't find e.g.
http://www.example.org/bar/youshouldntseethis65/, it should be brute forced
anyway, in my view, as ZAP is already aware of this location, and there could
be hidden directories or resources further down.
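The seeding behavior described above could be sketched as a walk over the known site tree, collecting every directory already under the selected base into the forced-browse queue (the node type here is a hypothetical stand-in for ZAP's site-tree nodes):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical site-tree node and seeding logic: collect every directory
// already known under the selected base into the forced-browse queue.
public class SeedQueue {

    static class Node {
        final String url;
        final boolean isDirectory;
        final List<Node> children = new ArrayList<>();

        Node(String url, boolean isDirectory) {
            this.url = url;
            this.isDirectory = isDirectory;
        }
    }

    static List<String> seed(Node base) {
        List<String> queue = new ArrayList<>();
        Deque<Node> stack = new ArrayDeque<>();
        stack.push(base);
        while (!stack.isEmpty()) {
            Node n = stack.pop();
            if (n.isDirectory) {
                queue.add(n.url); // brute force known directories too,
                n.children.forEach(stack::push); // even if Force Browse misses them
            }
        }
        return queue;
    }
}
```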
Cheers,
Sergey
Original comment by serge....@gmail.com
on 27 Feb 2013 at 7:19
Hi.
Yes, that makes sense. I've raised an issue to fix that (Issue 544).
Best regards.
Original comment by THC...@gmail.com
on 2 Mar 2013 at 5:22
Hi,
Thanks a lot for your consideration, I'll track this issue as well.
Cheers,
Sergey
Original comment by serge....@gmail.com
on 5 Mar 2013 at 11:04
Any update(s) on this?
Original comment by kingtho...@gmail.com
on 5 Jun 2014 at 1:00
Nope.
Original comment by THC...@gmail.com
on 5 Jun 2014 at 1:36
ZAP has been migrated to GitHub.
This issue can be found on GitHub issues with the same ID:
https://github.com/zaproxy/zaproxy/issues
Original comment by psii...@gmail.com
on 5 Jun 2015 at 9:17
Original issue reported on code.google.com by
psii...@gmail.com
on 27 Dec 2011 at 11:14