andresriancho closed this issue 6 years ago
my script:
plugins
output console,text_file,export_requests,csv_file,html_file
output
output config text_file
set output_file ~/w3af/fullscan/output-w3af.txt
set verbose True
back
output config console
set verbose False
back
output config export_requests
set output_file ~/w3af/fullscan/fuzzy_requests-w3af.txt
back
output config csv_file
set output_file ~/w3af/fullscan/output-w3af.csv
back
output config html_file
set output_file ~/w3af/fullscan/report-w3af.html
set verbose False
set template w3af/w3af/plugins/output/html_file/templates/complete.html
back
crawl web_spider
crawl
grep all
grep
audit all
audit
infrastructure all
infrastructure
back
http-settings
set timeout 1
back
target
set target_os unix
set target_framework php
set target http://mysite/
back
plugins
auth generic
plugins auth config generic
set username myusername@mail.com
set password mypass
set check_url http://mysite/web/tv/ajax_saul.php
set check_string ok
set password_field password
set username_field email
set auth_url http://mysite/web/login/index.php?check=1
back
plugins audit config rfi
set use_w3af_site true
set listen_address 127.0.0.2
#(RFI disabled, seems it doesn't work either)
back
start
mysite is a 555MB PHP site with 19,613 files in total, including 311 PHP files.
Results:
- 2GB RAM VM: OOM kernel panic
- 12 CPUs, 16GB RAM, 7GB SSD swap VM: OOM kernel panic after a 3-hour run, with a 4.3GB main.db_traces file
- 12 CPUs, 16GB RAM, 20GB SSD swap VM: ran for more than 3 hours without finishing
Hi there, Andres! I have an idea about the issue (at a very high level). I started using arachni and noticed an interesting option, "--http-response-max-size", which defaults to 500KB. That makes sense, because my shitty site (where I have the memory issues and scans that take more than 300 hours!) responds with around 1MB of content to most scan requests. For example, /?a=file%3A%2F%2F%2F..%2F..%2F..%2F..%2F..%2F..%2F%2Fetc%2Fpasswd returns more than 1MB. So I'm assuming w3af is trying to cache many of these? Arachni simply refuses large content with a 499 Client Closed Request (nginx). If you can suggest how I can run the scan with a response size limit, I'll try it. Thanks!
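The response-size cap described above can be sketched in plain Python. This is an illustrative sketch only, not w3af's or arachni's actual implementation; the helper names and the 500KB default are assumptions borrowed from arachni's documented default:

```python
# Sketch: cap how much of an HTTP response body the scanner keeps in memory,
# in the spirit of arachni's --http-response-max-size. Function names are
# hypothetical; only urllib from the standard library is used.
import urllib.request

MAX_RESPONSE_SIZE = 500 * 1024  # 500KB, arachni's default limit


def read_limited(chunks, max_size=MAX_RESPONSE_SIZE):
    """Accumulate body chunks, keeping at most max_size bytes.

    Returns (body, truncated) where truncated indicates that data
    beyond the limit was discarded instead of cached.
    """
    body = bytearray()
    for chunk in chunks:
        needed = max_size - len(body)
        if needed <= 0:
            return bytes(body), True
        if len(chunk) > needed:
            body += chunk[:needed]
            return bytes(body), True
        body += chunk
    return bytes(body), False


def fetch_limited(url, max_size=MAX_RESPONSE_SIZE):
    """Fetch url while bounding the in-memory body size."""
    with urllib.request.urlopen(url, timeout=10) as response:
        def chunks():
            while True:
                block = response.read(8192)
                if not block:
                    return
                yield block
        body, truncated = read_limited(chunks(), max_size)
        return response.status, body, truncated
```

With a cap like this, a 1MB response from the page above would be stored as at most 500KB plus a truncation flag, instead of being cached whole.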
Working on this a little bit, here are some questions and their answers:
Use this revision as baseline for comparing with experiments that have memory profiling: ~/performance-info/43de093/1/
Use this revision when the experiment does not have memory profiling:
~/performance-info/b95156c/0/
Yes. It seems that memory usage is not tightly related to:
Proof can be found at ~/performance-info/8eba19e/0/
Compare these two collector outputs with the baseline and decide:
./wpa-html --debug --output-file output.html ~/performance-info/791f4de/0/ ~/performance-info/791f4de/1/ ~/performance-info/b95156c/0/
Not really, the memory usage is still growing!
Yes.
See comparison at ./wpa-html --debug --output-file output.html ~/performance-info/43de093/1/ ~/performance-info/b95156c/0/
Recommendation: Be careful when enabling these:
TODO: Run tests!
13ec8c4: reduced core input queue
b95156c: baseline
./wpa-html --debug --output-file output.html ~/performance-info/13ec8c4/2/ ~/performance-info/13ec8c4/3/ ~/performance-info/b95156c/0/
The output of this comparison was unclear, so I'm running two new collectors:
./collector --debug --shell-on-fail examples/w3af/ 13ec8c4 --description="Reduced core's worker queue size - Run for two hours"
./collector --debug --shell-on-fail examples/w3af/ b95156c --description="Latest perf experiment without W3AF_CPU_PROFILING , W3AF_MEMORY_PROFILING , W3AF_PYTRACEMALLOC - 120 minutes"
Results are here:
./wpa-html --debug --output-file long-run-output.html ~/performance-info/13ec8c4/0/ ~/performance-info/13ec8c4/1/ ~/performance-info/b95156c/1/ ~/performance-info/b95156c/2/
Comparing them seems difficult; I believe I may need more runs. The memory usage graph seems to indicate that reducing the core worker input queue size actually increases memory usage (which is contrary to everything I believed).
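For reference, the snapshot diffing behind the memory-profiling runs (the W3AF_PYTRACEMALLOC flag mentioned above) can be sketched with the standard library's tracemalloc module. This is a minimal illustration of the technique, not w3af's actual instrumentation, and the workload variable is made up:

```python
# Sketch: attribute memory growth between two points in a run to the source
# lines that allocated it, using stdlib tracemalloc snapshot diffing.
import tracemalloc

tracemalloc.start()

before = tracemalloc.take_snapshot()

# Hypothetical workload standing in for a scan phase: ~1MB of allocations.
leaky = [bytes(1024) for _ in range(1000)]

after = tracemalloc.take_snapshot()

# Show which source lines allocated the most memory between snapshots.
for stat in after.compare_to(before, 'lineno')[:5]:
    print(stat)
```

Diffing two snapshots per experiment is what lets the collector reports point at specific queues or caches rather than only showing total RSS growth.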
Hey, guys! Any news on this thread?
I tried using w3af today; however, it consumed my whole 8GB of RAM. I am using Linux (Ubuntu 14.04). Is there any workaround?
I've been working on https://github.com/andresriancho/w3af/tree/feature/smarter-queue , which is related to the high memory usage issue.
Using CachedQueue we'll maintain the same amount of memory usage while reducing the blocking we might experience with the consumer/producer queues. This is very important for the Grep queue, which was blocking HTTP responses from reaching the core here:
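The idea behind CachedQueue can be sketched as a queue that keeps a bounded number of items in memory and spills the overflow to disk, so producers never block. This is an illustrative, non-thread-safe sketch under that assumption, not w3af's actual CachedQueue implementation:

```python
# Sketch of a disk-backed queue: at most `maxsize` items live in memory,
# extra items are pickled to temp files and reloaded in FIFO order.
# A real implementation would add locking for producer/consumer threads.
import os
import pickle
import tempfile
from collections import deque


class CachedQueue:
    def __init__(self, maxsize=100):
        self.maxsize = maxsize
        self._memory = deque()       # in-memory items
        self._disk = deque()         # paths of spilled items, FIFO
        self._spill_dir = tempfile.mkdtemp(prefix='cached-queue-')

    def put(self, item):
        if len(self._memory) < self.maxsize and not self._disk:
            self._memory.append(item)
        else:
            # Memory budget exhausted: serialize to disk instead of blocking.
            fd, path = tempfile.mkstemp(dir=self._spill_dir)
            with os.fdopen(fd, 'wb') as f:
                pickle.dump(item, f)
            self._disk.append(path)

    def get(self):
        item = self._memory.popleft()
        # Refill the memory window from disk, preserving FIFO order.
        if self._disk:
            path = self._disk.popleft()
            with open(path, 'rb') as f:
                self._memory.append(pickle.load(f))
            os.unlink(path)
        return item

    def qsize(self):
        return len(self._memory) + len(self._disk)
```

The trade-off: a fast producer (the core sending HTTP responses to Grep) pays a serialization cost once the memory window fills, but the process RSS stays bounded and the producer never stalls on a full queue.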
Fixed high memory usage https://github.com/andresriancho/w3af/commit/3feb6844805087de1a68aaca34236a7697736211
Using this scan profile triggers a severe high-memory-usage bug:
Memory usage grows for the entire duration of the scan:
I've been doing some work on this issue at https://github.com/andresriancho/w3af/tree/feature/queue-size-limit-experiment