Closed danielevailati-qa closed 6 years ago
Hah, interesting. Can you send the type of URL that you sent through the plugin?
Fixed this issue with the above commits. While the extension will still not be able to scan requests with such invalid URLs, it will no longer crash when you merely browse sites that produce them. Or were you planning to scan that website as well?
A pull request is already in the queue with portswigger to update the extension on the BApp store. Thanks for reporting!
I was scanning an ASP.NET site. I cannot post the URL because it's a private site. But what I can say is that I was doing an active scan with the extension activated.
Hm, ok. It seems urlparse thought the hostname was an IPv6 address, which was probably not the case. Or were you browsing a website on an IPv6 address?
Here's what the urlsplit did until it raised that error:
def urlsplit(url, scheme='', allow_fragments=True):
    """Parse a URL into 5 components:
    <scheme>://<netloc>/<path>?<query>#<fragment>
    Return a 5-tuple: (scheme, netloc, path, query, fragment).
    Note that we don't break the components up in smaller bits
    (e.g. netloc is a single string) and we don't expand % escapes."""
    allow_fragments = bool(allow_fragments)
    key = url, scheme, allow_fragments, type(url), type(scheme)
    cached = _parse_cache.get(key, None)
    if cached:
        return cached
    if len(_parse_cache) >= MAX_CACHE_SIZE:  # avoid runaway growth
        clear_cache()
    netloc = query = fragment = ''
    i = url.find(':')
    if i > 0:
        if url[:i] == 'http':  # optimize the common case
            scheme = url[:i].lower()
            url = url[i+1:]
            if url[:2] == '//':
                netloc, url = _splitnetloc(url, 2)
                if (('[' in netloc and ']' not in netloc) or
                        (']' in netloc and '[' not in netloc)):
                    raise ValueError("Invalid IPv6 URL")
# [...]
def _splitnetloc(url, start=0):
    delim = len(url)  # position of end of domain part of url, default is end
    for c in '/?#':  # look for delimiters; the order is NOT important
        wdelim = url.find(c, start)  # find first of this delim
        if wdelim >= 0:  # if found
            delim = min(delim, wdelim)  # use earliest delim position
    return url[start:delim], url[delim:]  # return (domain, rest)
So your URL must have been something like one of:

http://abc[def/
http://abc]def/
http://abc[def?
http://abc]def?
http://abc[def#
http://abc]def#

which are indeed all invalid IPv6 addresses/invalid hostnames.
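This is easy to reproduce with urlsplit directly (shown here with Python 3's urllib.parse, which carries the same bracket check as the code quoted above):

```python
from urllib.parse import urlsplit

# Each URL has an unbalanced '[' or ']' in its hostname, so urlsplit
# treats it as a malformed IPv6 literal and raises ValueError.
for bad in ("http://abc[def/", "http://abc]def?", "http://abc[def#"):
    try:
        urlsplit(bad)
    except ValueError as exc:
        print(bad, "->", exc)   # Invalid IPv6 URL
```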
So I think this was rather some browser-side code sending such an invalid URL, which then got picked up in processHttpMessage; that case should now be fixed. Actively scanning a request with such a URL is not possible anyway, because the URL is invalid.
If this problem occurs again (after the above fix is applied), I might reconsider and use a Java library to parse URLs, but for now I don't expect this to be an issue in the future.
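The fix described above presumably amounts to catching the parser's ValueError and skipping the unparseable request rather than crashing; a minimal sketch of that pattern, where the helper name `safe_urlsplit` is my own invention and not the extension's actual code:

```python
from urllib.parse import urlsplit

def safe_urlsplit(url):
    # Hypothetical helper mirroring the fix: swallow the ValueError that
    # urlsplit raises on unbalanced '['/']' in the hostname and report the
    # URL as unparseable, instead of letting it crash the message handler.
    try:
        return urlsplit(url)
    except ValueError:
        return None

print(safe_urlsplit("http://abc[def/"))              # None -> skip request
print(safe_urlsplit("http://example.com/a").netloc)  # example.com
```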