mgoins01 opened 6 years ago
I think I may have found the source of the `EOFError`s. I saw this forum post on BountySource: https://www.bountysource.com/issues/43351676-question-disk-full-100gb-how-to-adjust-scan which mentions `EOFError` issues along with large numbers of Arachni_Support_DatabaseQueue* files in /tmp. I, too, had over 17,000 such files in /tmp, ranging in size from 100 MB to 625 MB...
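If anyone else wants to check how much space these spill files are eating, a quick one-liner (a sketch only: the filename pattern and /tmp location are taken from this thread, and GNU `find` is assumed):

```shell
# Count the Arachni_Support_DatabaseQueue* spill files and total their size.
# Pattern and location come from this thread; adjust if your tmpdir differs.
find /tmp -maxdepth 1 -name 'Arachni_Support_DatabaseQueue*' -printf '%s\n' \
  | awk '{n++; b+=$1} END {printf "%d files, %.1f GB\n", n, b/1e9}'
```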
The forum post mentions that two settings may be playing a role: `--scope-dom-event-limit` and `--scope-dom-event-inheritance-limit`.
With this in mind, I ran a series of tests against the failing website, each with a 30-minute timeout, the only change being `--scope-dom-event-limit` set to 10, then 50, then 100, then 1000. None produced an `EOFError` or a memory allocation error.
The last test used `--scope-dom-event-limit 1000` with an 8-hour timeout, and I'm happy to report: no errors!
I'm not sure what a reasonable limit would be, but 1000 works in this case.
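In case it helps anyone repeat the sweep above, a minimal sketch. The `echo` makes it a dry run that only prints the commands (drop it to actually launch the scans); the URL is a placeholder, and the `--timeout` flag and HH:MM:SS format are my assumption based on the 30-minute timeouts mentioned above:

```shell
# Dry-run sweep over the --scope-dom-event-limit values from this thread.
# https://target.example.com is a placeholder target.
for limit in 10 50 100 1000; do
  echo arachni https://target.example.com \
       --scope-dom-event-limit "$limit" --timeout 00:30:00
done
```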
I'm running the latest code:
A couple of questions:
Are there plans to add support for `--scope-dom-event-limit` to the WebUI, or the ability to point to a saved scan profile? The option appears to be missing from the WebUI profile screen.
Has support been added for `--scope-dom-event-inheritance-limit` in either the command line or the WebUI? I'm not seeing the option in the Arachni 1.5.1 CLI or in WebUI 0.5.12.
Thanks for the great work! Can't wait to get this running in production against our entire external web footprint.
As for reasonable limits, that's basically up to each webapp.
I found Arachni_Support_DatabaseQueue[pid][*][1-121] files in /tmp too, but the `--scope-dom-event-limit` setting doesn't help in my case. I found that when the target server is unreachable, Arachni still produces the Arachni_Support_DatabaseQueue* files. I'm using arachni_rest_server for testing. :)
Some will always be produced, but most hold pages to be audited and browser jobs to be performed. Scope-limiting options naturally result in less workload and fewer of these files being created.
Great work on Arachni! I only recently became aware of it and have been testing it to eventually incorporate into our cybersecurity/vulnerability program. I'm seeing `EOFError`s when running Arachni against one of our main company websites and have been unable to figure out what's going on. The log that follows is from a single-instance DIRECT scan with a 2-hour timeout. The Arachni environment runs in AWS as a grid of 3 machines. The security group allows all outbound traffic, ssh/http from the management machines, and all traffic between the Arachni machines. The instances are t2.medium Amazon Linux (2 CPU x 4 GB memory, 50 GB disk). The PostgreSQL database is configured.