Hi @psiinon, thank you for your attention - I really like the ZAP scanner!
We will publish it on Docker Hub once all the test cases are completed.
I'll try to answer all your questions.
We will try to keep wavsep updated, even though we have the same problem as the original wavsep developer: time. Unfortunately, we work on this in our free time, without funding. However, we are in close contact with the University of Naples (I collaborate with the Network Security research team), so our idea is to have our students help us integrate new test cases.
The old vulnerabilities should still work; I tested them some time ago. I need to check OS-CMDi and XXE, which do not work correctly. As for testing with modern browsers, we do not currently consider effectiveness in terms of exploitability against the browsers, but it could be an interesting research direction.
We do not have a specific roadmap for the future, but I can share the research ideas that our company and my research team would like to pursue. Benchmarking is an interesting topic, and Reinforced Wavsep is the first step we intended to take before increasing the effectiveness of current solutions.
We intend to proceed with three main topics:

1. Define a taxonomy of web vulnerabilities and coverage criteria to evaluate the effectiveness of a benchmark for web scanners. Show that current benchmarks are limited in terms of covered tests, find a way to generate new test cases automatically by using the taxonomy, and add those test cases to Reinforced Wavsep.

2. Analyze the OWASP Benchmark and merge the two projects. As the OWASP Benchmark offers a lot of useful utilities, such as the crawler (which we also created in the utils folder) and the scorecard, we would like to explore the possibility of integrating the two benchmark platforms.

3. Create a microservices-based evaluation platform. Wavsep and the OWASP Benchmark still have a lot of limitations, some of which are shown on the OWASP Benchmark page. Our final idea would be to formalize the benchmark framework design and create a multi-platform solution that covers several languages, several databases and, eventually, both Windows and Linux operating systems. The idea is to leverage container-based flexibility and capabilities to create a multi-service stack that covers all the test cases.
All the OWASP Benchmark features, such as the scorecard, parsing, etc., will be integrated into the platform. A management web application will allow the generation of a test suite, so that a user or researcher can test their scanners and methodologies against it.
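To make the container idea concrete, here is a minimal sketch of what such a multi-service stack could look like as a Compose file. The service names, images, ports, and database choice are purely illustrative assumptions, not the actual project layout:

# Hypothetical docker-compose.yml for a multi-service benchmark stack.
# All image names, ports, and services below are illustrative assumptions.
version: "3.8"
services:
  rwavsep-java:                  # Java/Tomcat test cases
    image: example/rwavsep-java
    ports:
      - "8080:8080"
    depends_on:
      - db
  rwavsep-php:                   # a second language target for the same taxonomy
    image: example/rwavsep-php
    ports:
      - "8081:80"
    depends_on:
      - db
  db:                            # shared backing database for injection test cases
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example

Each test-case service could then be added or removed independently, which is what would make covering several languages and databases tractable.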
Those are challenging tasks and will require a lot of effort.
If you know any company or person interested in these ideas who could contribute in some way, please let me know.
Regards.
Thank you for such a detailed reply - and it all sounds great! I love the idea of getting students to contribute test cases - it's a great learning opportunity for them. We'll try to get ZAP to test this daily asap, but for that we need the docker image on Docker Hub. Is there any reason you wanted to complete the test cases first? If you can publish one now then we can test it with ZAP and give you feedback. Failing that we'll have to publish it ourselves, which might take us longer 😉
Hi @psiinon, I have just pushed it ;-)
I wanted to check the CMDi and XXE tests before publishing, but if you can test it, I will proceed to fix them!
Many thanks!
Hi @psiinon, I have implemented a feature for the OWASP BenchmarkUtils project that allows the scorecards to be used against other benchmarks, such as Wavsep. To use it:

1. Run mvn install to install the customized utils version. It adds an autogenerated numeric ID for each test in order to generalize the test case mapping.

2. Insert the results folder in the BenchmarkJava folder.

3. Run the scorecard generator as follows:

mvn -Djava.awt.headless=true org.owasp:benchmarkutils-maven-plugin:create-scorecard -DconfigFile=wavsepscoringconfig.yaml

This should generate the scores against the Wavsep benchmark.
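For context, the Benchmark scoring works from an expected-results CSV (test name, category, whether the vulnerability is real, CWE number). A Wavsep mapping built from the autogenerated IDs might look roughly like this; the test names and rows below are illustrative assumptions, not the actual generated file:

# Illustrative expected-results mapping (names and rows are assumptions)
# test name, category, real vulnerability, CWE
WavsepTest00001,sqli,true,89
WavsepTest00002,xss,true,79
WavsepTest00003,sqli,false,89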
If everything works, I am going to open a PR for this feature. It depends, however, on another PR that provides a large refactoring of the parsers.
Hey @giper45, that sounds interesting. Last time I checked, the Benchmark scoring was not suitable for us to use: uploading a results file and getting a report doesn't work for us. It would be much better to have a file which tells us which vulns are present on which URLs - we could then check our own results and update our metrics that way. Happy to explain in more detail if you want...
Hey @psiinon, we use HAR files to implement the crawler and generate a CSV file compliant with the OWASP Benchmark. I start from the URLs and generate test case names, so I think the file you need can be implemented easily. If you want to give me more details, you can reach me on Telegram.
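As a sketch of what that file could look like, the Benchmark-style CSV could simply be extended with a URL column; the layout, test names, and paths below are an illustrative assumption, not an agreed format:

# Illustrative URL-to-vulnerability mapping (layout and values are assumptions)
# test name, category, real vulnerability, CWE, URL
WavsepTest00001,sqli,true,89,http://localhost:8080/wavsep/active/SQL-Injection/Case01.jsp
WavsepTest00002,xss,false,79,http://localhost:8080/wavsep/active/Reflected-XSS/Case01.jsp

With the URL in each row, a scanner team could diff its own findings against the expected set without going through the scorecard upload flow.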
Publishing rwavsep to Docker Hub would make it much easier to use. FYI I published the old wavsep image here: https://hub.docker.com/r/owaspvwad/wavsep/
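If anyone wants to try that old image, the usual Docker commands should work; the port mapping below assumes wavsep's default Tomcat port (8080):

# Pull the published image and expose the app (the 8080 port is an assumption)
docker pull owaspvwad/wavsep
docker run -d -p 8080:8080 owaspvwad/wavsep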
FYI we run ZAP against a range of vulnerable apps and publish the results: https://www.zaproxy.org/docs/scans/
We used to run ZAP against wavsep but stopped after it went unmaintained for so long. I'd love to start using it again, but it will take a bit of work on our side. So, some questions for you:
Thanks for reanimating this project, and I hope you keep improving it!