OWASP-Benchmark / BenchmarkJava

OWASP Benchmark is a test suite designed to verify the speed and accuracy of software vulnerability detection tools. A fully runnable web app written in Java, it supports analysis by Static (SAST), Dynamic (DAST), and Runtime (IAST) tools that support Java. The idea is that since it is fully runnable and all the vulnerabilities are actually exploitable, it’s a fair test for any kind of vulnerability detection tool. For more details on this project, please see the OWASP Benchmark Project home page.
https://owasp.org/www-project-benchmark/
GNU General Public License v2.0
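Since the suite is fully runnable, it is normally built and started locally before any tool is pointed at it. A minimal sketch of doing so, assuming the repository's usual helper script names (these may differ by release):

```bash
# Build and start the Benchmark web app locally (requires Git, Java, and Maven).
# The script names below are the ones the repository usually ships and may vary by release.
git clone https://github.com/OWASP-Benchmark/BenchmarkJava.git
cd BenchmarkJava
./runBenchmark.sh        # Windows: runBenchmark.bat

# Once Tomcat finishes starting, the app is served over HTTPS with a
# self-signed certificate at https://localhost:8443/benchmark/
```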

Assistance Needed with OWASP Benchmark Project for Master's Thesis #201

Closed: sammy2201 closed this issue 8 months ago

sammy2201 commented 1 year ago

For my master's thesis, titled "Evaluating and Comparing Automated Web Application Security Testing Tools," my teammate and I have chosen the OWASP Benchmark and the Mutillidae website as our subjects of analysis. We have completed scans of the Mutillidae site using OWASP ZAP, Acunetix, Burp Suite, Wapiti, and Skipfish. However, we have run into difficulties scanning the OWASP Benchmark properly with these tools, and we are uncertain about the correct way to use the OWASP Benchmark project.

Specifically, while attempting to scan with Burp Suite, we followed the steps described on the OWASP Benchmark project website: crawling the entire Benchmark by right-clicking on the Benchmark entry in the Site Map, selecting Scan -> Open Scan Launcher, clicking Crawl, and then saving the project. We then selected the /Benchmark URL and invoked "Actively scan this branch." However, when we generated a report using createScorecards, the true positives and true negatives were both recorded as zero. We believe we may have missed some crucial steps or misinterpreted the process. Could you please advise how to scan with Burp Suite properly so that we get accurate results? Also, all of the issues Burp Suite reported were of the type "input returned in response".
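For anyone hitting the same zero-score result: the scorecard generator only counts findings it can read from a tool results file placed where it expects one, so the exported Burp report normally has to be copied into the project's results directory before re-running the scorecard script. A rough sketch, assuming the usual directory and script names (the expected file-naming convention can differ between Benchmark versions):

```bash
# Export the Burp Suite scan as an XML report, then copy it to where the
# scorecard generator looks for tool results. The path and file name here
# are illustrative; check the project docs for the exact convention.
cp ~/Downloads/burp_benchmark_scan.xml results/

# Regenerate the scorecards; HTML scorecards are typically written under scorecard/.
./createScorecards.sh    # Windows: createScorecards.bat
```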

Additionally, we encountered an HTTP 500 error while scanning with Wapiti. We are unsure of the cause and would appreciate any advice or troubleshooting steps you can offer to help us resolve this issue. Furthermore, regarding the OWASP Benchmark project itself, we have simply been copying the Benchmark URL (https://localhost:8443/benchmark/) into an automated scanning tool and starting the scan. However, we suspect that this might not be the only step required for a proper analysis. We are relatively new to the field of web application testing and would greatly appreciate any information or instructions you can provide on the correct usage of the OWASP Benchmark project.
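One quick sanity check before launching any DAST scan (and a sensible first step in diagnosing the Wapiti 500 errors) is to confirm that the running Benchmark instance is actually reachable over its self-signed HTTPS endpoint, then point the scanner at that base URL. A minimal sketch; the ZAP options shown are its standard quick-scan command-line flags, but exact options vary by tool and version:

```bash
# Confirm the Benchmark app responds; -k skips verification of the
# self-signed certificate.
curl -k https://localhost:8443/benchmark/

# Example of scanning the same base URL non-interactively with OWASP ZAP's
# quick scan (flags may differ by ZAP version).
zap.sh -cmd -quickurl https://localhost:8443/benchmark/ -quickout /tmp/zap-benchmark-report.xml
```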

Your assistance and guidance in addressing these issues would be immensely helpful for our master's thesis. We are eager to gain a deeper understanding of web application security testing and make meaningful contributions to this field. Thank you in advance for your time and support.

davewichers commented 1 year ago

For any tool that reported finding vulnerabilities (like Burp Suite), can you please send me the results file (dave.wichers@owasp.org)? It's very possible the tool vendor changed its results file format, in which case we need to update the scorecard generator for that tool, and we can fix that. Or, if the results file itself looks empty or wrong, we can tell you that, and hopefully you can then figure out the proper way to generate a results file we can parse. As for Wapiti, you'll have to work out on your own how to use that tool to scan the Benchmark.
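If it helps while preparing the file to send, a quick way to confirm the Burp export is not effectively empty is to check that it contains issue entries; a small sketch, assuming Burp's usual XML report structure and an illustrative file name:

```bash
# Count <issue> entries in the exported Burp XML report (the element name
# assumes Burp's standard XML export format; the file name is an example).
grep -c "<issue>" burp_benchmark_scan.xml
```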

davewichers commented 1 year ago

@sammy2201 - This issue is 4+ months old. Can we close it now?