modzero / mod0BurpUploadScanner

HTTP file upload scanner for Burp Proxy

UploadScanner bug #12

Closed: danielevailati-qa closed this issue 6 years ago

danielevailati-qa commented 6 years ago
Traceback (most recent call last):
  File "C:\Users\d.vailati\AppData\Roaming\BurpSuite\bapps\b2244cbb6953442cb3c82fa0a0d908fa\UploadScanner.py", line 798, in processHttpMessage
    matchers = self.dl_matchers.get_matchers_for_url(url)
  File "C:\Users\d.vailati\AppData\Roaming\BurpSuite\bapps\b2244cbb6953442cb3c82fa0a0d908fa\UploadScanner.py", line 6595, in get_matchers_for_url
    hostport = self._get_host(url)
  File "C:\Users\d.vailati\AppData\Roaming\BurpSuite\bapps\b2244cbb6953442cb3c82fa0a0d908fa\UploadScanner.py", line 6622, in _get_host
    x = urlparse.urlparse(url)
  File "C:\Users\d.vailati\Desktop\SecurityTesting\jython-standalone-2.7.0.jar\Lib\urlparse.py", line 140, in urlparse
    tuple = urlsplit(url, scheme, allow_fragments)
  File "C:\Users\d.vailati\Desktop\SecurityTesting\jython-standalone-2.7.0.jar\Lib\urlparse.py", line 188, in urlsplit
    raise ValueError("Invalid IPv6 URL")
ValueError: Invalid IPv6 URL
floyd-fuh commented 6 years ago

Hah, interesting. Can you send the type of URL that you sent through the plugin?

floyd-fuh commented 6 years ago

Fixed this issue with the above commits. While the extension will not be able to scan requests with such invalid URLs, it will no longer crash when you simply browse sites that produce such invalid URLs. Or were you planning to scan that website as well?

A pull request is already in the queue with portswigger to update the extension on the BApp store. Thanks for reporting!
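For reference, this is roughly the shape of the defensive parsing I mean (just a sketch written here for illustration, not a copy of the actual commit; safe_get_host is a made-up helper name):

# Minimal sketch: parse the URL defensively so a ValueError from urlparse
# no longer propagates out of processHttpMessage.
import urlparse

def safe_get_host(url):
    """Return the host[:port] part of a URL, or None if the URL is unparseable."""
    try:
        return urlparse.urlparse(url).netloc
    except ValueError:
        # Raised e.g. as "Invalid IPv6 URL" when the netloc contains a
        # stray '[' or ']' without its counterpart.
        return None

# Callers can then simply skip matching for URLs that could not be parsed:
# hostport = safe_get_host(url)
# if hostport is None:
#     return set()  # nothing to match against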

danielevailati-qa commented 6 years ago

I was scanning an ASP.NET site. I cannot post the URL because it's a private site, but what I can say is that I was running an active scan with the extension enabled.

floyd-fuh commented 6 years ago

Hm, ok. It seems urlparse thought the hostname was an IPv6 address, which was probably not the case. Or were you browsing a website hosted on an IPv6 address?

Here's what urlsplit did until it raised that error:

def urlsplit(url, scheme='', allow_fragments=True):
    """Parse a URL into 5 components:
    <scheme>://<netloc>/<path>?<query>#<fragment>
    Return a 5-tuple: (scheme, netloc, path, query, fragment).
    Note that we don't break the components up in smaller bits
    (e.g. netloc is a single string) and we don't expand % escapes."""
    allow_fragments = bool(allow_fragments)
    key = url, scheme, allow_fragments, type(url), type(scheme)
    cached = _parse_cache.get(key, None)
    if cached:
        return cached
    if len(_parse_cache) >= MAX_CACHE_SIZE: # avoid runaway growth
        clear_cache()
    netloc = query = fragment = ''
    i = url.find(':')
    if i > 0:
        if url[:i] == 'http': # optimize the common case
            scheme = url[:i].lower()
            url = url[i+1:]
            if url[:2] == '//':
                netloc, url = _splitnetloc(url, 2)
                if (('[' in netloc and ']' not in netloc) or
                        (']' in netloc and '[' not in netloc)):
                    raise ValueError("Invalid IPv6 URL")
# [...]
def _splitnetloc(url, start=0):
    delim = len(url)   # position of end of domain part of url, default is end
    for c in '/?#':    # look for delimiters; the order is NOT important
        wdelim = url.find(c, start)        # find first of this delim
        if wdelim >= 0:                    # if found
            delim = min(delim, wdelim)     # use earliest delim position
    return url[start:delim], url[delim:] # return (domain, rest)

So your URL must have been something like one of these:

http://abc[def/
http://abc]def/
http://abc[def?
http://abc]def?
http://abc[def#
http://abc]def#

All of these are indeed invalid IPv6 addresses / invalid hostnames.
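You can reproduce the error from a plain Python 2 / Jython 2.7 shell with URLs of that shape (these are just made-up stand-ins, not your actual URLs):

# Quick reproduction: any of the candidate URLs above makes urlsplit raise
# the same ValueError seen in the traceback.
import urlparse

for candidate in ('http://abc[def/', 'http://abc]def?', 'http://abc[def#'):
    try:
        urlparse.urlsplit(candidate)
    except ValueError as e:
        print('%s -> ValueError: %s' % (candidate, e))
# Each line prints "... -> ValueError: Invalid IPv6 URL"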

So I think this was more likely some browser-side code sending such an invalid URL, which then got picked up in processHttpMessage; that case should now be fixed. Actively scanning a request with such a URL is not possible anyway, because the URL is invalid.

If this problem occurs again after the above fix is applied, I might reconsider and use the Java library to parse URLs instead, but for now I don't expect this to be an issue in the future.
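For completeness, such a Java-based fallback could look roughly like this from Jython (only a sketch with a made-up helper name; I have not verified how leniently java.net.URL actually treats these particular malformed hosts):

# Sketch of the Java-based alternative mentioned above (not implemented).
# In Jython, java.net classes can be imported and used directly.
from java.net import URL, MalformedURLException

def get_host_via_java(url_string):
    """Return "host" or "host:port" using Java's URL parser, or None on failure."""
    try:
        u = URL(url_string)
    except MalformedURLException:
        # Raised e.g. for unknown protocols; whether these particular
        # malformed hosts are rejected as well would need testing.
        return None
    host = u.getHost()
    port = u.getPort()  # -1 if the URL has no explicit port
    return host if port == -1 else '%s:%d' % (host, port)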