PortSwigger / BChecks

BChecks collection for Burp Suite Professional and Burp Suite Enterprise Edition
https://portswigger.net/burp/documentation/scanner/bchecks
GNU Lesser General Public License v3.0

Enhancement: Support "send request" on "given response" #20

Closed DanaEpp closed 1 year ago

DanaEpp commented 1 year ago

It would really be helpful if we could send a request based on a response condition.

Consider this scenario:

  1. I want to detect whenever a response has a Content-Type of application/json
  2. When detected, I then wish to send a request to fetch a URL relative to that response (e.g. GET {latest.response.url.path}/../swagger/swagger.json)

This would allow the detection of potential API docs relative to a possible API endpoint.

The point is we cannot send a request based on the "given response" action. Could that be considered in the future?

A-J-C commented 1 year ago

You should be able to achieve this with the existing language features.

Something along the lines of:

given request then
    if {base.response} contains "application/json" then
        send request called probe:
            path: `{base.response.url.path}/../swagger/swagger.json`

Note that given response is intended for passive use, only analysing the response. given request gives you access to a {base.request} and {base.response} pair for cases where you want a more active style of check.
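To make that pattern concrete, a fuller sketch of how the whole check might be assembled could look like the following. This assumes v1-beta BCheck syntax; the metadata values, the probe name, and the status-code condition are illustrative and untested, not a finished check:

```
metadata:
    language: v1-beta
    name: "API docs relative to JSON endpoint"
    description: "Probes for swagger.json next to endpoints returning JSON"
    author: "example"

given request then
    if {base.response} contains "application/json" then
        send request called probe:
            method: "GET"
            path: `{base.response.url.path}/../swagger/swagger.json`

        if {probe.response.status_code} is "200" then
            report issue:
                severity: info
                confidence: tentative
                detail: "Possible API documentation found relative to a JSON endpoint."
                remediation: "Restrict access to API documentation if it is not meant to be public."
```

A tentative confidence is deliberate here: a 200 response alone does not prove the file is a real OpenAPI document, so a reviewer would still verify the finding manually.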

Let us know how you get on!

DanaEpp commented 1 year ago

Any idea how to prevent accidental abuse here?

Imagine we saw 5 requests to the same endpoint. Would this not trigger the same BCheck 5 times? If my array of potential doc paths has, say, 50 paths to check, every request would trigger 50 more requests.

I could see this killing a scan at scale. "given host then" is close, as it would only run once per host, but this needs to work on a per-subdirectory basis, as an API could be nested several subdirectories deep.
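For comparison, the "given host then" workaround mentioned above might be sketched as follows (metadata block omitted for brevity; BCheck has no loops, so each candidate path gets its own send request, and the two paths here are illustrative guesses, not a curated list):

```
given host then
    send request called swagger:
        method: "GET"
        path: "/swagger/swagger.json"

    send request called openapi:
        method: "GET"
        path: "/openapi.json"

    if {swagger.response.status_code} is "200" then
        report issue:
            severity: info
            confidence: tentative
            detail: "Possible Swagger documentation at a well-known host-level path."
            remediation: "Restrict access to API documentation if it is not meant to be public."
```

This runs once per host, which bounds the request count, but as noted it cannot probe paths relative to each nested subdirectory.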

Thoughts?

A-J-C commented 1 year ago

Assuming you're running these as part of a full crawl and audit, you shouldn't experience that problem. The crawler doesn't blindly pass every single request it sees to the audit; it attempts to recognise and deduplicate requests.

I'd recommend building a few example test apps, running scans against them to see what happens, and sharing any findings back here if there's anything you think would be useful!

DanaEpp commented 1 year ago

I am closing this issue as everything works as intended: the scanner deduplicates requests as expected.